Simon Bergweiler and I decided to continue our research in the field of interaction with the Internet of Services. The result is Calisto, a 40’’ multitouch terminal that can be operated either through touch interactions or by speech. What’s pretty cool is that users can send (or rather throw) pictures they shot with their Android phone directly to the terminal using a Frisbee gesture.

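In case you wonder how such a throw gesture could be recognized on the phone, here is a minimal sketch (not the actual Calisto code): it listens to the accelerometer and fires a hypothetical callback once the acceleration magnitude exceeds an assumed threshold, at which point the app could start uploading the currently selected picture.

```java
// Minimal sketch, not the Calisto implementation: detecting a "throw"-like
// flick on Android via the accelerometer. The threshold value and the
// Listener callback are illustrative assumptions.
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class ThrowGestureDetector implements SensorEventListener {

    /** Hypothetical callback fired when a throw-like motion is detected. */
    public interface Listener {
        void onThrowDetected();
    }

    private static final float THROW_THRESHOLD = 18f; // m/s^2, assumed value
    private final Listener listener;

    public ThrowGestureDetector(SensorManager sensorManager, Listener listener) {
        this.listener = listener;
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // Overall acceleration magnitude; a sharp flick exceeds the threshold.
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        if (magnitude > THROW_THRESHOLD) {
            listener.onThrowDetected(); // e.g. start sending the selected picture
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```
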
Once synchronized, the picture is analyzed and corresponding semantic annotations are attached to the media. At this point users can use Spotlets to access the Semantic Web – as in our other system CoMET (http://www.mat-d.com/site/web-3-0-innovative-semantic-interactions-with-spotlets/) – and retrieve interesting linked information via drag-and-drop or via speech input through their own Android mobile phone.

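To give an idea of what retrieving linked information can look like under the hood, here is a small, hypothetical sketch that asks the public DBpedia SPARQL endpoint for the abstract of an annotated concept. It assumes Apache Jena on the classpath; the endpoint, the property names and the query itself are illustrative and not taken from CoMET or Calisto.

```java
// Sketch only: fetch a short description for an annotated concept label
// (e.g. "Berlin") from DBpedia. Assumes the Apache Jena ARQ library.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class LinkedDataLookup {

    /** Returns an English abstract for the given concept label, or null if none is found. */
    public static String fetchAbstract(String label) {
        String sparql =
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "PREFIX dbo:  <http://dbpedia.org/ontology/> " +
            "SELECT ?abstract WHERE { " +
            "  ?concept rdfs:label \"" + label + "\"@en ; " +
            "           dbo:abstract ?abstract . " +
            "  FILTER (lang(?abstract) = 'en') } LIMIT 1";

        Query query = QueryFactory.create(sparql);
        QueryExecution exec =
            QueryExecutionFactory.sparqlService("https://dbpedia.org/sparql", query);
        try {
            ResultSet results = exec.execSelect();
            if (results.hasNext()) {
                QuerySolution row = results.next();
                return row.getLiteral("abstract").getString();
            }
            return null; // nothing found for this label
        } finally {
            exec.close();
        }
    }
}
```
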
The graphical user interface was built using Adobe AIR, and the multimodal dialog system was developed at DFKI (the German Research Center for Artificial Intelligence). The Android phones connect to the terminal over a wireless network.

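As a rough illustration of the phone-to-terminal transfer (the actual protocol isn’t described here, so host, port and framing are pure assumptions), a picture could be pushed over a plain TCP socket on the local wireless network like this:

```java
// Minimal sketch of a length-prefixed image upload over TCP.
// Host, port and framing are assumptions, not the Calisto protocol.
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.Socket;

public class PictureUploader {

    public static void send(File picture, String terminalHost, int terminalPort)
            throws IOException {
        try (Socket socket = new Socket(terminalHost, terminalPort);
             FileInputStream in = new FileInputStream(picture);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {

            // Simple length prefix so the terminal knows when the image ends.
            out.writeLong(picture.length());

            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.flush();
        }
    }
}
```
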
This terminal also gives a glimpse of the new kinds of interaction with the Semantic Web (aka Web 3.0) that will become possible in the future.

Don't hesitate to share your comments. I'm always happy to read your input!