Last week, I attended the HRI (Human-Robot Interaction) conference held in Daegu, Korea. Since I started my Master’s program, I have been reading many papers from this conference, even after I changed my major to AI during my PhD. Unfortunately, I never had the opportunity to attend it before. But this time, it was held in Korea! So I submitted a short paper, and it was accepted as an LBR (late-breaking report).


December 17, 2018. Prof. Brian Scassellati of Yale University (a.k.a. Scaz) visited our lab. The purpose of his visit was none other than the PhD proposal defense of Nadine, a fellow PhD student in our lab. When Nadine was looking for an external examiner, I recommended this professor, and since the scheduling worked out, he came all the way to Okinawa.

The Social Robotics Lab at Yale, which he directs, was one of the labs I most wanted to join for my PhD (but I was filtered out in Yale’s screening process, so I never even got to send him a single email..). He works on Social Robotics and Socially Assistive Robotics, the areas I was deeply interested in. Although I never got to have him as my PhD advisor, it was wonderful to meet him here in Okinawa and to have the chance to introduce my research.


A few months ago, I attended a conference called UR 2018 (Ubiquitous Robots), where I presented one of my works [1] on (a kind of) computational creativity. This project started when I first joined the PhD program. For various reasons, it was discontinued for several years, and I recently picked it up again. I haven’t been able to develop it further (yet), but there is still some interesting stuff that I’d like to share.


Part 1. What It Is and Why We Do It

In this post, I’ll introduce an interesting method that we often use in our research, called the Error Regression Scheme (ERS) [1-2]. In short, the ERS is a kind of online optimization technique, but it differs from other techniques in several ways. For instance, during the ERS, the weights are not updated. Instead, the neurons’ internal values are updated to minimize the error at the output. The ERS is a kind of prediction-error minimization mechanism, and there are several (philosophical) ideas behind it, which I’ll discuss in later posts. Let’s begin with what it is and why we use it.
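The core idea can be sketched in a few lines. The snippet below is only a minimal illustration, not the actual ERS implementation from our papers: the network, learning rate, and iteration count are all invented for the example. The weights W stay fixed, and only a latent state z is updated by gradient descent so that the output error shrinks.

```python
import numpy as np

# Fixed (pre-trained) weights -- in the ERS, these are NOT updated.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))

def forward(z):
    """Map a latent state z to an output prediction."""
    return np.tanh(W @ z)

target = np.array([0.5, -0.2, 0.1])  # the observation to be explained

# Error regression: update the latent state z (not the weights)
# by gradient descent to minimize the prediction error at the output.
z = np.zeros(2)
lr = 0.1
for _ in range(500):
    pred = forward(z)
    err = pred - target
    # Backpropagate the output error to z through tanh and W.
    grad_z = W.T @ (err * (1.0 - pred ** 2))
    z -= lr * grad_z

final_error = 0.5 * np.sum((forward(z) - target) ** 2)
```

After the loop, the prediction error is smaller than at the initial state: the model has "regressed" its internal state to fit the observation, without touching a single weight.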


One of the difficulties I faced when I joined the Cognitive Neurorobotics Lab was that I wasn’t familiar with the terms used in the lab. Some terms come from the field of dynamics, and others were “coined” by my advisor (Prof. Tani). So it took me quite some time to understand them, and I imagine it can be even more difficult for others.

So I’d like to briefly explain the terms that frequently appear in my studies on “cognitive neurorobotics” and in Tani’s book (“Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena”). This post is aimed at a general audience (someone like me five years ago), so these terms won’t be explained in great detail; instead, I’ll just try to give a general idea of them.


When I train my neural network models, I often use a method called the “softmax transformation”. It represents the training data in a sparse form. When I first learned how to do it, I had some trouble because there weren’t enough examples, and I still can’t find a nice explanation of the softmax transformation with examples. So here is a brief explanation along with sample code. Let’s see how the softmax transformation works, step by step.
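As a rough illustration of the idea (a minimal sketch of one common variant, not necessarily the exact scheme used in my models): each continuous value is mapped to a sparse probability vector over a set of fixed reference points, with references closer to the value receiving higher probability. The reference grid, the `sigma` parameter, and the function name below are my own choices for the example.

```python
import numpy as np

def softmax_transform(x, refs, sigma=0.05):
    """Transform each scalar in x into a sparse probability vector.

    Each value is compared against fixed reference points; closer
    references get higher probability (a "soft" one-hot encoding).
    """
    # Negative squared distance to each reference acts as the logit.
    logits = -((x[:, None] - refs[None, :]) ** 2) / sigma
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

refs = np.linspace(-1.0, 1.0, 11)   # 11 reference points in [-1, 1]
x = np.array([0.0, 0.73, -1.0])
P = softmax_transform(x, refs)      # shape (3, 11); rows sum to 1
```

A smaller `sigma` makes the encoding sharper (closer to one-hot). The original value can be approximately recovered afterwards by taking the expectation of the reference points under each row’s distribution.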


Jungsik Hwang

황중식, 물결, mulkkyul, Jungsik