The smart home software Home Assistant can now also control devices in the home using large language models, or LLMs. This makes it possible to give more complex voice commands.
Home Assistant, like other smart home platforms, offers a voice assistant that interprets spoken commands, performs the requested action, and provides a response. By default, Home Assistant handles this with its built-in 'conversation agent'. It has also been possible from the start to hold a conversation with an LLM-based voice assistant, but users could not use that assistant to control devices in their home.
That changes with the new version of Home Assistant, the organization writes in an announcement. There is now also an LLM-based conversation agent, which can handle much more complex commands. For example: "I'm going into a video meeting, can you make sure people can see my face?" The new voice assistant can understand the intent behind such commands and take the corresponding action, the organization claims.
Users can choose between the traditional voice assistant and the LLM-based one. Those who opt for an LLM assistant can also decide for themselves which LLM is used: Home Assistant already has integrations with OpenAI, the maker of ChatGPT, and Google AI. It is also possible to run local LLMs, but those cannot yet control the home. Home Assistant is working with Nvidia to make this possible at a later date.
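The choice between a traditional, rule-based agent and an LLM-backed one can be sketched as a simple dispatcher. The following Python sketch is purely illustrative: all names (`Agent`, `RuleBasedAgent`, `LlmAgent`, `Hub`) are hypothetical and do not reflect Home Assistant's actual API, and the LLM call is stubbed out.

```python
# Hypothetical sketch of routing voice commands to a user-selected
# conversation agent. Not Home Assistant's real API.
from typing import Callable, Dict, Protocol


class Agent(Protocol):
    def handle(self, command: str) -> str: ...


class RuleBasedAgent:
    """Matches fixed phrases to actions, like a traditional assistant."""

    def __init__(self) -> None:
        self.rules: Dict[str, str] = {
            "turn on the lights": "lights_on",
            "turn off the lights": "lights_off",
        }

    def handle(self, command: str) -> str:
        return self.rules.get(command.lower().strip(), "unknown_command")


class LlmAgent:
    """Delegates free-form commands to an LLM; the model call is stubbed."""

    def __init__(self, model: Callable[[str], str]) -> None:
        self.model = model

    def handle(self, command: str) -> str:
        # A real integration would send the command plus the home's
        # device state to the model and parse its chosen action.
        return self.model(command)


class Hub:
    """Routes commands to whichever agent the user selected."""

    def __init__(self, agent: Agent) -> None:
        self.agent = agent

    def set_agent(self, agent: Agent) -> None:
        self.agent = agent

    def command(self, text: str) -> str:
        return self.agent.handle(text)


if __name__ == "__main__":
    hub = Hub(RuleBasedAgent())
    print(hub.command("turn on the lights"))  # lights_on

    # Switch to the (stubbed) LLM-backed agent for free-form commands.
    hub.set_agent(LlmAgent(
        lambda cmd: "lights_on" if "see my face" in cmd else "unknown_command"
    ))
    print(hub.command(
        "I'm going into a video meeting, can you make sure people can see my face?"
    ))  # lights_on
```

The rule-based agent fails on anything outside its fixed phrase list, which is exactly the gap the article describes the LLM agent filling.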
Home Assistant was founded in 2013 by the Dutchman Paulus Schoutsen. Tweakers spoke to him last May during the State of the Open Home stream.