That’s the benefit of using AI and machine learning - once you have enough source material, you can throw it all in and it’ll eventually spit out a model.
Which is exactly what Meta did with their Massively Multilingual Speech project, which supports text-to-speech and speech-to-text for 1,107 different languages.
Whether it's actually any good in 99% of them, I don't have a clue, but it exists.
It seems more like a proof of concept for that paper than something they are pursuing seriously, judging by its GitHub location in some examples folder that hasn't seen any significant updates in over a year. If it were so great, I would assume they would pursue it more actively and would have replaced their existing models with it two years later.