Any talk of technological advancement inevitably turns to artificial intelligence (AI), and Facebook’s ongoing development of AR wearables is a case in point. Will it give you a super-advanced digital secretary, like the one in (Tom Holland’s) Spider-Man’s suit? No, not really.

Facebook outlined that its new machine learning model, called the ‘Anticipative Video Transformer’ (AVT), can predict future actions based on visual interpretation. So while it can’t automatically connect you to the International Space Station, it can at least estimate the likeliest next action a person will take based on what that person is currently looking at.

As Facebook puts it:

“AVT could be especially useful for applications such as an AR “action coach” or an AI assistant, by prompting someone that they may be about to make a mistake in completing a task or by reacting ahead of time with a helpful prompt for the next step in a task. For example, AVT could warn someone that the pan they’re about to pick up is hot, based on the person’s previous interactions with the pan.”

Robots Rule

The implications here could mark a pivotal turning point for the practical use of ‘smart’ devices and applications. For AR glasses at least, potential applications range from mundane household chores to specialized tasks at work. In that regard, Facebook’s AR wearables might prove revolutionary in fields such as medicine, engineering, design, and manufacturing.

Facebook attests to the reliability of their model, stating that:

“We train the model to predict future actions and features using three losses. First, we classify the features in the last frame of a video clip in order to predict labeled future action; second, we regress the intermediate frame feature to the features of the succeeding frames, which trains the model to predict what comes next; third, we train the model to classify intermediate actions. We’ve shown that by jointly optimizing the three losses, our model predicts future actions 10 percent to 30 percent better than models trained only with bidirectional attention.”
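The three-loss setup quoted above can be sketched in plain Python. This is an illustrative toy version only, not Facebook’s actual AVT code: the function names, the unweighted sum of the losses, and the array shapes are all assumptions made here for clarity.

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for one example.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def avt_style_loss(frame_feats, pred_next_feats, final_logits, future_label,
                   inter_logits, inter_labels):
    """Hypothetical combination of the three losses described in the quote.

    frame_feats:     (T, D) features of each frame in the clip
    pred_next_feats: (T-1, D) model's predicted features for each next frame
    final_logits:    (C,) action logits from the last frame's features
    inter_logits:    (T-1, C) action logits for the intermediate frames
    """
    # 1) Classify the last frame's features as the labeled future action.
    future_loss = cross_entropy(final_logits, future_label)
    # 2) Regress each intermediate prediction onto the succeeding frame's
    #    features, training the model to predict what comes next.
    feature_loss = np.mean((pred_next_feats - frame_feats[1:]) ** 2)
    # 3) Classify the intermediate actions.
    inter_loss = np.mean([cross_entropy(l, y)
                          for l, y in zip(inter_logits, inter_labels)])
    # Joint objective: here a simple unweighted sum (an assumption).
    return future_loss + feature_loss + inter_loss
```

Jointly minimizing a sum like this pushes the model to represent not just the current frame, but how the scene is about to evolve.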

Even a 10 percent improvement is significant on its own. Note, though, that the comparison is against baseline models trained only with bidirectional attention, not against every leading model on the market; still, it suggests Facebook’s experimental approach already outperforms a standard alternative by a meaningful margin.

The Wrap

Development of such software is enough to get even the likes of Elon Musk interested, although we doubt Facebook will expand operations to Mars anytime soon. Regardless, something like this brings endless possibilities. Facebook seems to be following the path of the Google Glass, which found its niche as a key tool in industrial workspaces, while looking to improve on added utilities and spatial recognition and awareness.

It does sound like something ripped out of a sci-fi movie, but that only proves the future might be closer than we think. Using a tire-replacement scenario as an example, Facebook highlights that its AR glasses won’t just guide physical actions; they could also remind you of daily routines based on their assessment of your current action and location. If you have a vivid imagination, just think of where continuous development could take something like that.



Sources

https://bit.ly/3vbfjKk