So, I prepared the system for loading voice over lines when a new line of text is displayed. It works quite well: it requests a line from the Ink narrative system, then waits for the audio clip associated with that line’s tag to finish playing before either loading a new clip to play or displaying the choices screen.
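To give a rough idea of that flow, here is a minimal Unity coroutine sketch. It is not the project’s actual code: the component name, the tag-to-clip lookup, and the ShowChoices call are hypothetical stand-ins, and only the Ink calls (Continue, currentTags, currentChoices) are the real Ink runtime API.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch of the voice over flow: play the clip tied to the
// current Ink line's tag, wait for it to finish, then either continue
// the story or show the choices screen.
public class VoiceOverPlayer : MonoBehaviour
{
    public AudioSource audioSource;

    // 'story' is the Ink runtime story object driving the narrative.
    public IEnumerator PlayStory(Ink.Runtime.Story story)
    {
        while (story.canContinue)
        {
            string line = story.Continue();      // ask Ink for the next line of text
            string tag = story.currentTags.Count > 0 ? story.currentTags[0] : null;

            AudioClip clip = GetClipForTag(tag); // hypothetical tag -> clip lookup
            if (clip != null)
            {
                audioSource.clip = clip;
                audioSource.Play();
                // wait for this line's voice over to finish before moving on
                yield return new WaitWhile(() => audioSource.isPlaying);
            }
        }

        if (story.currentChoices.Count > 0)
            ShowChoices(story.currentChoices);   // hypothetical choices screen
    }

    AudioClip GetClipForTag(string tag) =>
        tag != null ? Resources.Load<AudioClip>("VO/" + tag) : null;

    void ShowChoices(System.Collections.Generic.List<Ink.Runtime.Choice> choices) { /* ... */ }
}
```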

However, after that, I spent a lot of time trying to improve the gesture recognition based on the magnetic field data. I didn’t have much success. Even after pre-processing the data in different ways, and making sure I could visualize it effectively (with the Monitor Components interface), I was unable to get a better success rate from the gesture recognizer: it could not consistently detect a gesture, and it started confusing gestures with one another as soon as I added more than two to the training set.
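For reference, one of the simpler pre-processing ideas I mean here is smoothing the raw magnetometer samples before handing them to the recognizer. This is only a sketch of that idea, not the code actually used in the project; the class name and the window size are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of one pre-processing step: a simple moving average over raw
// magnetic field samples to reduce jitter before gesture recognition.
// The window size (5 here) is an assumption, not a tuned value.
public class MagneticFieldSmoother
{
    readonly Queue<Vector3> window = new Queue<Vector3>();
    readonly int windowSize;

    public MagneticFieldSmoother(int windowSize = 5)
    {
        this.windowSize = windowSize;
    }

    // Push a raw magnetometer reading and get back the smoothed value.
    public Vector3 AddSample(Vector3 rawSample)
    {
        window.Enqueue(rawSample);
        if (window.Count > windowSize)
            window.Dequeue();

        Vector3 sum = Vector3.zero;
        foreach (var s in window)
            sum += s;
        return sum / window.Count;
    }
}
```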

I believe I basically have one course of action left to try: running the game with stronger magnets, possibly arranged so that one of the poles points directly at the phone. I’ll try it out next time I’m at Concordia.

Moving on from OSC prototyping

In order to be able to experiment with the sensor readings on a standalone phone app (instead of communicating with a PC game host), I’m going to integrate the unity-android-sensors library into the project. (…) This worked pretty well; to do it I mostly had to add a few components and change how they are wired together. There weren’t many changes to the already existing components.
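To give a sense of the rewiring, here is a hedged sketch of how a local sensor source could slot in where the OSC receiver used to feed the gesture pipeline. The IMagneticFieldSource interface and the component names are illustrative assumptions, and Unity’s built-in Input.compass is used as a stand-in for the readings rather than reproducing the unity-android-sensors API here.

```csharp
using UnityEngine;

// Hypothetical abstraction over where magnetic field readings come from,
// so the rest of the gesture pipeline doesn't care whether the data arrives
// over OSC from a PC host or straight from the phone's own sensors.
public interface IMagneticFieldSource
{
    Vector3 CurrentReading { get; }
}

// Illustrative on-device source. Unity's Input.compass.rawVector is used as
// a stand-in; in the actual project the unity-android-sensors components
// would provide the values instead.
public class LocalMagneticFieldSource : MonoBehaviour, IMagneticFieldSource
{
    public Vector3 CurrentReading { get; private set; }

    void Start()
    {
        Input.compass.enabled = true;  // compass values only update once enabled
        Input.location.Start();        // compass needs location services on some devices
    }

    void Update()
    {
        CurrentReading = Input.compass.rawVector;  // raw magnetometer reading, in microteslas
    }
}
```

The idea is that the existing gesture recognition components keep consuming an IMagneticFieldSource, whether it is the OSC-fed one or this local one, which is why wiring changes stayed small.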