Google I/O brought some interesting innovations that could strengthen the future of wearable technology and make the motion controllers currently on the market obsolete. Google’s Advanced Technology and Projects (ATAP) division announced Project Jacquard, which will build wearable technology into regular clothing. Google also announced Project Soli, a motion controller that could change the way we interact with gadgets and other everyday objects. So, what do these two projects have in store for us? Let’s take a closer look.
Google has partnered with Levi’s on this project to develop “interactive textiles,” with the aim of making electronic clothing practical. Knitting circuitry into fabric will change the way clothing is made, while still allowing normal tailoring and care and giving customers the option to choose clothing that suits their needs.
With Jacquard, conductive fibers are woven seamlessly into the fabric of your choice, turning even your favorite denim into a touch-sensitive surface. Project Jacquard will produce denim with intelligence built in, and consumers are bound to love it.
Beyond the partnership with Levi’s, the project marks Google’s next step in technological innovation: conductive fibers can be woven into fabric along with minute circuits that integrate smoothly into the clothing.
Project Soli brings gesture control, which is not a new concept, to users in a new way. In the 1920s, Leon Theremin developed a device that used hand gestures to play music, but it never went mainstream. Technology has since developed significantly, and touch interfaces are now commonplace.
What makes Project Soli unique is that its gesture tracking relies on radar, a different approach to motion tracking. Radar identifies moving objects using high-frequency radio waves.
Soli’s sensors can capture motion at up to 10,000 frames per second, making them more accurate than camera-based systems. And unlike cameras, radar can pass through certain materials, which makes it adaptable to more form factors.
What do you think about these innovations? Which one do you like best? We would love to hear your thoughts in the comments below.