This is an Android task automation app that lets you script a set of instructions (a flow) that takes information from the user and device sensors and sends it to a web server. The data stored there can then be read by a suitably scripted prim and used to drive its behaviour. So, for example, you can rotate your Android device (a Nexus 7 in my case) and have a prim rotate inworld, or you can say the word "green" and, via Android speech recognition, have the prim turn green.
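To make the prim side concrete, here's a minimal LSL sketch. The endpoint URL and the response format (either the word "green" or comma-separated device angles in degrees) are assumptions for illustration; Automagic itself doesn't dictate them, it just needs a flow that writes the values to the server.

[code]
// Prim-side sketch: poll the web endpoint the Automagic flow writes to
// and apply whatever comes back. The URL and response format ("green",
// or "x,y,z" device angles in degrees) are assumptions for illustration.
string URL = "http://example.com/automagic/state"; // hypothetical

default
{
    state_entry()
    {
        llSetTimerEvent(2.0); // poll every 2 s
    }

    timer()
    {
        llHTTPRequest(URL, [HTTP_METHOD, "GET"], "");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (status != 200) return;
        if (body == "green")
            llSetColor(<0.0, 1.0, 0.0>, ALL_SIDES);
        else
        {
            // otherwise treat the body as comma-separated Euler angles
            list a = llParseString2List(body, [","], []);
            vector e = <llList2Float(a, 0),
                        llList2Float(a, 1),
                        llList2Float(a, 2)> * DEG_TO_RAD;
            llSetRot(llEuler2Rot(e));
        }
    }
}
[/code]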
Of course, you can also send information the other way, i.e. from the prim, and have Automagic speak the response, parse it and select another flow to activate. In principle, flows can be downloaded automatically, triggered by, for example, an NFC tag. Although avatar movement isn't controllable (it could be faked, I guess), you might be able to force a teleport.
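The reverse direction is just an ordinary HTTP request from the prim. Another sketch with a made-up endpoint; on the Android side, a flow would fetch or receive the posted value and hand it to a text-to-speech action.

[code]
// Reverse direction: the prim POSTs a message to a (made-up) endpoint;
// an Automagic flow then retrieves it and passes it to text-to-speech.
string URL = "http://example.com/automagic/say"; // hypothetical

default
{
    touch_start(integer n)
    {
        // send the toucher's name as a simple form-encoded parameter
        llHTTPRequest(URL,
            [HTTP_METHOD, "POST",
             HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
            "msg=" + llEscapeURL(llDetectedName(0)));
    }
}
[/code]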
As geolocation is supported via GPS etc., it's also feasible to think in terms of ambient use in conjunction with the Lumiya viewer. I'd hoped that Lumiya might eventually support some of these features itself, but development in that area seems to have stalled.
What this provides is a possible second-screen/sit-back environment that might usefully complement inworld educational scenarios using BYOD tech. Of course, it does require a server (for the moment) and has potential latency, security and battery issues; a dedicated app would be better. It's perhaps less immersive than keyboard/mouse use, but that's not always a bad thing. It's probably also not appropriate for multi-user scenarios, although I think it might work well with pairs of students, one controlling the Android device and the other the PC.
I should emphasise that this hasn't been field-tested with students but I thought it an interesting concept worth sharing. Automagic is commercial but there is a free trial version.
Re: Automagic Android app
Just to confirm that you can indeed use Automagic with Lumiya to achieve some measure of cross-reality, i.e. movement in the real world causes a corresponding change in position in the virtual world.
The obvious trick would be to use a scripted vehicle, i.e. a seated avatar, but Lumiya does not support the avatar camera in this mode. An option that does work, however, is to move a prim to the position corresponding to the GPS coordinates and then use a sensor-activated script in it to force-teleport the avatar to the prim. I wouldn't claim this is a high-fidelity experience; there are issues with latency, GPS accuracy and weather (both sunlight and rain), not to mention Lumiya swapping from 3D to text mode during teleports, so you have to swap back manually. The latter means you want to update position at most every 30 s, but in that time you can walk far enough to lose the sensor/tp prim (there are ways around that).
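For what it's worth, the sensor/tp prim only needs a few lines of LSL. This is a sketch under some assumptions: an OpenSim region with the OSSL function osTeleportAgent enabled (on Second Life you'd need an experience-scoped llTeleportAgent instead), and something else, e.g. llSetRegionPos driven by the GPS flow, having already moved the prim into place.

[code]
// Sensor/tp prim sketch. Assumes an OpenSim region with osTeleportAgent
// enabled; on Second Life an experience-scoped llTeleportAgent would be
// needed instead. The prim is assumed to have been moved to the
// GPS-mapped position already.
default
{
    state_entry()
    {
        // scan for avatars within 20 m, every 5 s
        llSensorRepeat("", NULL_KEY, AGENT, 20.0, PI, 5.0);
    }

    sensor(integer n)
    {
        vector here = llGetPos();
        integer i;
        for (i = 0; i < n; ++i)
            osTeleportAgent(llDetectedKey(i), here, <1.0, 0.0, 0.0>);
    }
}
[/code]

The 20 m range and 5 s scan rate are arbitrary; they'd want tuning against the 30 s position-update interval mentioned above.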
Update: OK, I forgot to mention that CJ Davies at St Andrews has been doing this much more proficiently: see http://cjdavies.org/?p=1505