I mean these kinds of “AI companions” are grifts anyway. They won’t take off because they are a solution looking for a problem. They aren’t as affordable as entry-level HomePod/Amazon Echo/Google Home units, so they can’t be bought as a “why not, and it’s a speaker anyway” type thing. They don’t have any secondary functionality you don’t already have in your phone.
And if that’s not enough, you can bet your cute arse that Apple and Google are both working on bringing LLM functions into their assistants, basically making these units obsolete.
The moment that these companies decide that they can’t afford to pay for servers and API subscriptions anymore, the service will die and you’ll end up with a colourful brick. Don’t buy these things, they’re unfinished and will die within a year or two.
The ultimate issue is exactly what you said; phones exist. I’m not carrying another voice assistant around when both Siri and Google Assistant can be installed on my phone.
Based on MKBHD’s review, this whole product category definitely screams “solution in search of a problem”.
Like, I can imagine a world where a smart watch replaces my phone for day-to-day stuff, but that’s because I’m in that weird space where I prefer a laptop for almost anything serious while still appreciating the convenience of staying connected wherever I am, even on the move.
But another device I need to keep in my pocket? What’s the point?
Rabbit has a SIM slot. I think the idea is that once its software gets better, it could replace a phone for people who just want to do simple things quickly. Its battery seems to be pretty rubbish, though, and for now the software is not nearly good enough.
But you can literally buy a cheap Android phone for less than this device, and it does everything this thing does (and everything it might do some day), maybe even better. Why buy a strange and unfamiliar form factor when most people are already comfortable with a smartphone? They can just choose not to interact with anything other than the assistant if they really want to, and still be better off.
I agree, fairly gimmicky, but I do like the idea of being able to press a single button to ask a quick question. I like my Meta glasses for the same reason, but they need some improvement, and quite frankly, I’d like them a whole lot more if they were from someone other than Meta. I also like the smallness of it. If I could get away with carrying just a tiny box sometimes, I’d do that. The software on it needs to get much better, so hopefully they stick with it.
On Pixel (but probably also other phones) you can press and hold the power button to summon the assistant. Set ChatGPT or whatever as your assistant and you have a Rabbit equivalent with a one-button summon.
Great point! Here are Samsung instructions for this.
Download ChatGPT from the Play Store (make sure it’s by OpenAI and not a scam app). Set it up and make sure you have access to the voice feature.
Download Good Lock from the Galaxy Store (NOT the Play Store).
In the Good Lock app, in the “Life Up” section, download the “RegiStar” module.
Open the RegiStar module, tap the “Side key press and hold action” setting, and turn it on.
In the options underneath, choose “Open app”, scroll to the ChatGPT app in the list, tap the settings icon next to its name, and then tap “Voice”.
Now you should be able to long-press the side button to go straight to the ChatGPT voice assistant.
The Rabbit is also just an Android APK. You could literally install it on a cheap phone if you’d like. It’s beyond useless.
What someone needs to do is put something similar into something all cutesy like a Furby and sell it for kids. Just a $100, Wi-Fi-only, PG-rated thing that can do some fun stuff. It wouldn’t change the world, but it could turn an actual profit for a few years and not feel like a rip-off.
Good luck making an AI you are 100% sure is PG rated.
Btw, someone already put ChatGPT + Whisper in a kid’s plushie/toy; saw it on an old WAN Show. The lag is tremendous though.
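For anyone curious, that kind of toy is basically three API calls chained together, which is presumably where the lag comes from. A minimal sketch of the loop, assuming OpenAI’s hosted Whisper/TTS and the openai Python package; the model names and file handling are my guesses, not whatever the actual toy runs:

```python
# Rough guess at a "ChatGPT + Whisper in a plushie" loop, using OpenAI's hosted
# APIs (openai>=1.0). Model names and file handling are illustrative, not what
# the actual toy runs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_from_recording(wav_path: str) -> bytes:
    # 1. Speech-to-text: transcribe the kid's question with Whisper.
    with open(wav_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

    # 2. Chat completion: get a short, kid-friendly answer.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer briefly, for a young child."},
            {"role": "user", "content": transcript.text},
        ],
    )

    # 3. Text-to-speech: turn the answer back into audio for the toy's speaker.
    speech = client.audio.speech.create(
        model="tts-1", voice="alloy",
        input=reply.choices[0].message.content,
    )
    return speech.content  # audio bytes to play back

# Three sequential network round-trips per question, on toy hardware and home
# Wi-Fi. Hence the tremendous lag.
```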
But it’s just an Android app on a dedicated device that reviews say has a shit interface and battery.
Run it on a cheap phone that does more for less.
The battery part is fixed now 😂 they were able to give that thing 5x the battery life through a software update.
Makes me wonder what they were doing in the background prior to this update.
Former crypto company… Power drain… I feel like there’s an answer here…
😂 that would be a hell of a scam
That’s amazing, I hadn’t heard about the battery fix!
Yeah, build this into a watch or earbud that I already have on my person for other reasons, one that gives me hands-free access to a decent AI when I don’t have my phone on me, and I might have some interest.
What phone is that that supports both Siri and Google Assistant on the same device?
iPhones only, basically. Google Assistant is available through an app, but that’s still more convenient than buying a $200 device
Absolutely a grift.
The CEO is a fucking joke. This is their bio on LinkedIn.
The resume of someone who has never done an honest day’s work in their life.
Only the first item is related to business, and even that implies repeated failure.
Solutions looking for problems are a mainstay in multiple industries, from materials science to chemistry. It’s not necessarily a bad idea.
In the early days of laser development, it was seen as a solution seeking a problem. A few decades later, it actually turned out to be really handy, but it would have been tough to sell this idea to anyone before that. Imagine how hard it is to find funding for research that solves a problem that doesn’t exist.
In development and science, sure. But this is a finished product on the market.
The principle is the same. “Let’s hope someone finds this useful.” It’s always a crapshoot.
They’re a solution looking to solve a problem that already has a well-established, better solution. The modern smartphone and voice assistants have been around for 14+ years…
For all these AI devices can currently accomplish, our budget $200 phones can do an immeasurable amount more.
If anything, they should be focusing on the voice assistant aspect - “Hey Google, add the nearest gas station to my trip” “Here’s a list of gas stations (I know you’re driving but please review this list and select one using the tiny select button)” {presses button} “Please enable location data analytics to continue”
I think there’s already a way to forward Google Home requests directly to ChatGPT; I might be wrong though.
That wouldn’t surprise me. I think there’s a Siri shortcut for integrating with ChatGPT. It’s not the most elegant of solutions, but it works well enough. I’m quite sure that this year we’ll see whatever Google and Apple have cooked up in terms of machine learning integration into their operating systems. Likely a flagship feature of the new Pixel phones, and definitely a significant Siri update on iPhone, probably along with some gimmicky feature to sell the new 16 Pros.
At that point, who is going to care about these devices?
In addition to being able to run the exact same thing on the phone you already have.
Their device doesn’t have any hardware specific to its use case. Even if Google and Apple don’t bring any improvement to their own solutions, soon enough someone is bound to just offer an “assistant AI app” with a subscription, proxying OpenAI requests and using the touchscreen, camera, microphone and speaker that are already there instead of making you buy a new set of those.
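To illustrate (this is my own rough sketch, not anyone’s actual product): such an app is little more than a thin server that forwards the phone’s transcribed request to OpenAI with the provider’s own key, plus some UI. Flask, the endpoint name and the model choice are assumptions here, and the billing/auth layer is left out:

```python
# Sketch of the "assistant AI app with a subscription" idea: the phone app sends
# the user's transcribed request here, and this server forwards it to OpenAI
# with the provider's own key. Flask, the endpoint name and the model choice are
# illustrative assumptions; billing/auth is omitted.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # the provider's OPENAI_API_KEY, not the user's

@app.post("/assistant")
def assistant():
    user_text = request.json["text"]  # captured with the phone's own mic
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_text}],
    )
    # The phone's own screen/speaker handles the output; no extra hardware.
    return jsonify({"reply": reply.choices[0].message.content})

if __name__ == "__main__":
    app.run(port=8000)
```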
The “AI” in the R1 is utter shit. Wired eviscerated it in a review.
https://www.wired.com/review/rabbit-r1/
It is somewhat OK considering it’s a free app.
You could say the same about Siri, which is also utter shit.
And yet, for both you are supposed to pay for an overpriced device. You can at least pirate the R1 app.
I think there may be a market for an LLM that runs locally and privately incorporates personal data.
Yes, there is. And yes, it would be huge. I know a lot of people who are staying away from all this as long as the privacy issues aren’t resolved (there are other issues, but at this point the cat is out of the bag).
But running large models locally requires a ton of resources. It may become a reality in the future, but in the meantime letting more, smaller providers offer a service (and a self-hosted option for corporations/enthusiasts) is way better in terms of resource usage. And it’s already a thing; what needs work now is improving the UI and integrations.
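For the self-hosted option, a lot of local servers (llama.cpp’s server, vLLM, LocalAI, …) already expose an OpenAI-compatible API, so the client side barely changes. A rough sketch, assuming such a server is listening locally; the port and model name are placeholders:

```python
# Sketch of pointing the same kind of client at a self-hosted, OpenAI-compatible
# endpoint instead of a third-party service. The port and model name below are
# placeholders; use whatever your local server (llama.cpp server, vLLM,
# LocalAI, ...) actually exposes.
from openai import OpenAI

local = OpenAI(
    base_url="http://localhost:8080/v1",  # the self-hosted server
    api_key="not-needed-locally",         # many local servers ignore the key
)

reply = local.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder; check with local.models.list()
    messages=[{"role": "user", "content": "Summarise my notes on backups."}],
)
print(reply.choices[0].message.content)
```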
In fact, very far from the “impressive” world of generated text and pictures, using an LLM plus integrations (retrieval-augmented generation, or whatever it’s called) to build a sort of documentation index you can query in natural language is a very interesting tool that could be useful to a lot of people, both individually and in corporate environments. And some projects are already looking that way.
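A bare-bones sketch of that kind of documentation index, under my own assumptions (the openai package for embeddings, numpy for similarity, toy documents); a real setup would chunk documents and use a proper vector store:

```python
# Bare-bones "ask your docs" sketch: embed the documents once, then answer a
# question from the closest match. The example documents, model names and
# plain-numpy cosine similarity are illustrative; a real setup would chunk
# documents and use a proper vector store.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Backups run nightly at 02:00 and are kept for 30 days.",
    "VPN access requires the corporate certificate installed on the device.",
    "Expense reports must be filed before the 5th of the following month.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)  # index the documentation once

def ask(question: str) -> str:
    q = embed([question])[0]
    # Cosine similarity between the question and every document.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best = docs[int(np.argmax(scores))]
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context: {best}"},
            {"role": "user", "content": question},
        ],
    )
    return answer.choices[0].message.content

print(ask("How long are backups retained?"))
```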
I’m not holding my breath for portable, good, customized large models (if only for the economics of energy consumption), but moving away from “everything goes to a third-party service provider” is a great goal.