A few months ago, I attended a conference called UR 2018 (Ubiquitous Robots), where I presented one of my works [1] on (a kind of) computational creativity. This project started when I first joined the PhD program. For various reasons, it was put on hold for several years, and I only recently picked it up again. I haven't been able to develop it much further (yet), but there is still some interesting stuff that I'd like to share.

A few days ago, JNNS2018 (The 28th Annual Conference of the Japanese Neural Network Society) was held at OIST. I was given an opportunity to present my previous work on deep visuomotor learning, and here are the slides that I used.

In our study, we often mention ‘open-loop’ and ‘closed-loop’ generation. Although these terms are very popular in control theory, they are relatively rare in our field, cognitive neurorobotics. In this posting, I'll briefly introduce what they are and how we use them.
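To make the distinction concrete, here is a minimal toy sketch (my own illustration, not our lab's actual code). The one-step predictor below is a stand-in for a trained model: in open-loop generation the real observation is fed in at every step, while in closed-loop generation the model's own prediction is fed back as its next input.

```python
def predictor(x):
    # stand-in for a trained one-step model: predicts the next
    # value of the toy sequence 0.0, 0.1, 0.2, ...
    return x + 0.1

def open_loop(observations):
    # open-loop: the real observation drives the model at every step
    return [predictor(x) for x in observations]

def closed_loop(x0, steps):
    # closed-loop: the model's own prediction becomes the next input
    preds, x = [], x0
    for _ in range(steps):
        x = predictor(x)
        preds.append(x)
    return preds
```

With a perfect predictor the two modes agree; with an imperfect one, closed-loop generation accumulates its own prediction errors over the steps, which is exactly why the distinction matters.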

The TensorFlow implementation of P-VMDNN has just been released! You can find it here: https://github.com/mulkkyul/p-vmdnn_tensorflow

## Part 2. How We Do It

In the previous posting, I briefly introduced the Error Regression Scheme (ERS). In this posting, I’ll talk a bit more about how we actually use the ERS in our experiments.

## Part 1. What It Is and Why We Do It

In this posting, I'll introduce an interesting method that we often use in our research, called the Error Regression Scheme (ERS) [1-2]. In short, the ERS is a sort of online optimization technique, but it differs from other techniques in several ways. For instance, during the ERS, the weights are not updated. Instead, the neurons' activation values are updated to minimize the error at the output. The ERS is a kind of prediction error minimization mechanism, and there are several (philosophical) ideas behind it, which I'll discuss in later postings. Let's begin with what it is and why we use it.
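The core idea (freeze the weights, optimize the states) can be sketched in a few lines. This is a hypothetical toy example with a frozen linear readout, using my own names and numbers rather than our actual network: it only illustrates that gradient descent is applied to the state vector `c` while the weights `W` stay fixed.

```python
import numpy as np

# Toy illustration of the ERS idea (hypothetical model, not our network):
# W plays the role of the trained, frozen weights; c plays the role of the
# neurons' values. Only c is updated, by gradient descent on the output error.

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, -0.5]])          # frozen "trained" weights
target = np.array([0.5, -0.2, 0.1])  # observed output we want to account for

c = np.zeros(2)                      # internal state: the only free variable
lr = 0.1
for _ in range(200):
    err = W @ c - target             # prediction error at the output
    c -= lr * (W.T @ err)            # gradient of 0.5*||err||^2 w.r.t. c
```

Note that `W` never changes in the loop; only the state `c` moves to reduce the prediction error, which is the defining property of the scheme.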

One of the difficulties I faced when I joined the Cognitive Neurorobotics Lab was that I wasn't familiar with the terms used in the lab. Some terms come from the field of dynamical systems, and others were "coined" by my advisor (Prof. Tani). So it took me quite some time to understand them, and I guess it can be even harder for other people.

So, I'd like to briefly explain the terms that frequently appear in my studies on "cognitive neurorobotics" and in Tani's book ("Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena"). This post is targeted at a general audience (someone like me five years ago), so the terms won't be explained in great detail. Instead, I'll just try to give a general idea of each.

When I train my neural network models, I often use a method called the "softmax transformation". It is a way of representing the training data in a sparse form. When I first learned how to do it, I had some trouble because there weren't enough examples, and I still can't find a nice explanation of the softmax transformation with worked examples. So here is a brief explanation with sample code. Let's see how the softmax transformation works, step by step.
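As a preview, here is a minimal sketch of the encoding as I use it. The number of reference points (11) and the sharpness parameter `sigma` below are illustrative choices, not fixed by the method: each scalar value is turned into a sparse vector over a set of reference points by applying a softmax to the scaled negative squared distances.

```python
import numpy as np

def softmax_transform(x, refs, sigma=0.05):
    # distance of the scalar x to every reference point, scaled by sigma
    d = -((x - refs) ** 2) / sigma
    e = np.exp(d - d.max())          # subtract the max for numerical stability
    return e / e.sum()               # sparse vector that sums to 1

refs = np.linspace(-1.0, 1.0, 11)    # 11 reference points covering [-1, 1]
v = softmax_transform(0.4, refs)     # sparse code peaked at the point 0.4

# inverse transformation: expectation of the reference points under the code
x_rec = float(v @ refs)
```

The transformation is easy to invert by taking the expectation of the reference points under the sparse vector, which is handy for mapping a network's sparse output back to the original scale.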

Last September, I attended ICDL-EPIROB 2017, which was held in Lisbon, Portugal. I really enjoyed the conference and the beautiful city, and I also had a great time with my old & new friends :)

This time, I was lucky to have the opportunity to give an oral presentation. I presented my recent study on visuomotor learning, particularly on the P-VMDNN (Predictive Visuo-Motor Deep Dynamic Neural Network).

The manuscript, the presentation slides (PDF), and the supplementary videos that I used during my talk can be downloaded below:
