Orbit is an LLM addon/extension for Firefox that runs on the Mistral 7B model. It can summarize a given webpage, YouTube videos, and so on, and you can ask it questions about what's on the page. It is very privacy friendly and does not require signing up for an account.
I personally tried it, and found it to be incredibly useful! I think this is going to be one of my long term addons along with uBlock Origin, Decentraleyes and so on. I would highly recommend checking this out!
Most important part of the thread:
In its beta stage, Orbit is currently not open-source. This doesn’t mean it will remain this way forever. If Orbit gains traction and we have the resources and funding to support an open-source project, I’m sure things could change.
Press X to doubt.
No need for doubt. There is no technology in place that would guarantee any kind of privacy.
For the current version, we are using a Mistral LLM (Mistral 7B) hosted within Mozilla’s GCP instance.
Has Mozilla done something to deserve this skepticism? They were founded on open-source and AFAIK have continued to support open-source. Mozilla is far from a perfect organization, but if this project were a success, I think it would be out of character for them to keep it closed-source.
Then why make it closed-source to begin with?
Eh, skepticism should be the default.
But I agree with you, nothing they’ve done is inherently bad, though they’ve done some abysmally stupid things in the way they handle them.
But I also really wish they’d stop fucking around with half-assed things like this and focus on core utilities.
It is very privacy friendly […]
What makes you believe that? The most information I could find about this is that it doesn’t “save your session data.” The Orbit privacy policy also seems a bit bare, and I can’t decide if that’s a good thing or not.
Either way, you’re still sending data to a third party service to process. Might be worth it for some people.
A 7B model can run on a GPU with just 6GB of VRAM, provided the weights are stored in a compressed (quantized) format, which basically every NVIDIA GPU can handle.
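A back-of-the-envelope check of that claim, assuming 4-bit weight quantization and a rough ~20% overhead for activations and the KV cache (both figures are illustrative assumptions, not measured values):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 0.20) -> float:
    """Rough VRAM estimate: weight storage plus a fixed overhead fraction
    for activations and KV cache (the 20% default is an assumption)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9  # decimal GB

print(estimate_vram_gb(7, 4))   # ~4.2 GB -> fits in 6 GB of VRAM
print(estimate_vram_gb(7, 16))  # ~16.8 GB -> needs a much bigger GPU
```

At 4 bits per weight a 7B model fits comfortably in 6GB; at full 16-bit precision it doesn't come close, which is why quantization is what makes local inference on consumer GPUs practical.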
If the AI assistant runs locally, this is great. If it uses the cloud, well, that’s going to cost money somehow.
Probably not for me as I’m not interested in a summarizing tool, but I’m not against AI in general.
OAN, I think over time, the community will see that AI was a bubble, but in the same way that the internet was a bubble back in the day.
Surprised to see this opinion on Lemmy haha. Yep, totally agree with ya here!
If you really care for an LLM, run it locally… Not sure if this does it…