Apple on Tuesday announced a series of new accessibility tools for the iPhone and iPad, including a feature that promises to replicate a user’s voice for phone calls after only 15 minutes of training.
The new tool, called Personal Voice, lets users read a series of text prompts to record audio so the technology can learn their voice.
A related feature called Live Speech will then use the “synthesized voice” to read the user’s typed text aloud during phone calls, FaceTime conversations and in-person conversations. People will also be able to save commonly used phrases to use during live conversations.
The feature is one of several aimed at making Apple’s devices more inclusive for people with cognitive, vision, hearing and mobility disabilities. Apple said people with conditions that can cause them to lose their voice over time, such as ALS (amyotrophic lateral sclerosis), could benefit most from the tools.
According to Apple, the feature will be rolled out later this year.
Other tech companies have experimented with using AI to replicate a voice. Last year, Amazon said it was working on an update to its Alexa system that would allow the technology to mimic any voice, even that of a deceased family member. (The feature has not yet been released.)