Yamanashi: Day 2 – Campus Tour

For the duration of the exchange program, our teacher stayed at the same accommodation as we did, so we all started our first day together by walking to the university. When we arrived, we had to complete some administrative procedures before we could receive the scholarship of 80,000 yen. This took much longer than expected, so we unfortunately started Japanese classes quite late. The lessons covered topics such as how to order in restaurants and how to read an Izakaya menu. Lunchtime started at 12 pm, and the cafeteria pleasantly reminded me of my time at Keio University. It works on the same principle: you could either order a full dish, e.g. a rice bowl topped with pork, or select individual items such as rice, miso soup, or a pork cutlet.

After lunch, we were introduced to the AI faculty and its laboratory. There are three branches, with the first one specializing in visual deep learning. The use cases presented to us involved object detection and the classification of fruit quality. Japanese people pay great attention to the quality of their fruit. For instance, in the case of grapes, a technique called “berry thinning” is used to enhance the size and quality of the remaining grapes. Typically, only specialized, experienced workers can do this, but because such workers are rare, deep learning has been combined with AR glasses to help less experienced workers perform the same task. A similar use case involves strawberries, but the goal here is to remove overripe strawberries from the plant through a mix of gamification and crowdsourcing. With the help of AR glasses, pickers are shown the overripe strawberries and the direction of the break; mishandling results in a penalty to the overall score.

The second department focuses on NLP. One project introduced to us was the automated peer reviewing of papers based on criteria such as originality and scientific contribution. However, as I had already experienced myself, parsing text from a PDF file is quite complicated. As a result, the presented prototype only works on a specific template format and does not yet include graphics in its scoring.
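To make the PDF problem a bit more concrete, here is a minimal sketch of naive text extraction in Python. The library choice, file name, and comments are my own assumptions for illustration, not the lab's actual pipeline:

```python
# A minimal sketch of why PDF parsing is tricky: extract raw text with pypdf.
# (Library choice and file name are assumptions, not the lab's real setup.)
from pypdf import PdfReader

reader = PdfReader("paper.pdf")  # hypothetical submission PDF
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# The raw extraction loses column layout, figures, and equation structure,
# which is roughly why the prototype only supports one known template format.
print(text[:500])
```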

The third department, led by a professor with a background in mechanical engineering, covered both auditory and visual use cases. For example, they were working on reducing background noise in sports commentary, generating new handwritten Hiragana samples for handwriting recognition using a complex autoencoder setup, and classifying grape color and readiness for harvest, which is especially helpful when using harvesting robots.
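Out of curiosity, here is a toy sketch of what a purely color-based ripeness check could look like. The OpenCV approach, file name, HSV thresholds, and cutoff are all my own assumptions; the lab presumably uses a learned model rather than hand-tuned rules:

```python
# Toy illustration: estimate grape ripeness from the fraction of dark purple
# pixels in a photo of a bunch. All values here are assumptions for the sketch.
import cv2
import numpy as np

img = cv2.imread("grape_bunch.jpg")           # hypothetical photo of one bunch
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough HSV range for dark purple berries; a real system would learn this.
lower = np.array([110, 50, 20])
upper = np.array([160, 255, 200])
mask = cv2.inRange(hsv, lower, upper)

ripe_fraction = mask.mean() / 255.0
print(f"Estimated ripe-pixel fraction: {ripe_fraction:.2f}")
if ripe_fraction > 0.6:                       # arbitrary cutoff for the sketch
    print("Bunch looks ready for the harvesting robot.")
```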

Personally, I found these use cases extremely interesting and was impressed by the international teams and their deep technical knowledge. Compared to my own department, it seemed that we were only scratching the surface of ML and advanced data analytics. After classes, I joined Mei, Leng, and David for a short walk up a hill near our accommodation, and we ended up having takoyaki together.
