Ask HN: What AI assistants are already bundled for Linux?

2024-03-01 04:38:31

One thing about LLMs is that they are 6GB+ (and much larger for “smart” ones) just sitting in the background. They suck power and produce heat like nothing else, and they are finicky, especially at smaller sizes.

Running one as a background desktop assistant is a whole different animal than calling a Microsoft API.

At least with a GPU that can power-save, that’s not the case. I have a box with some 3090s in it; each card idles at <50W with the weights loaded into VRAM when it’s not doing inference. Only when I ask it to do inference does it spin up and start consuming 300-400W.
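For anyone who wants to verify this, per-card power draw is easy to sample; a minimal sketch using the nvidia-ml-py bindings (my choice of tooling, not something the parent mentioned):

    # pip install nvidia-ml-py  (provides the pynvml module)
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; loop over indices for multi-card boxes

    # Sample once a second: expect tens of watts at idle even with weights
    # resident in VRAM, and hundreds of watts only during inference.
    for _ in range(10):
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # API reports milliwatts
        print(f"power draw: {watts:.0f} W")
        time.sleep(1)

    pynvml.nvmlShutdown()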

> One thing about LLMs is that they are 6GB+ (and much larger for “smart” ones) just sitting in the background. They suck power and produce heat like nothing else, and …

Huh? That’s not at all true. It only uses processing power (CPU) while it actually generates text; otherwise it sits and waits. Yes, it occupies memory (RAM or VRAM) if you don’t unload it, but you can configure it to start up when you need it and shut down when you don’t.
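With llama-cpp-python, for instance (my choice of runtime, and the model path is hypothetical), load-on-demand is just object lifetime:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    def answer(prompt: str) -> str:
        # Load the weights only when a request actually arrives...
        model = Llama(model_path="/models/example-7b.Q4_K_M.gguf")  # hypothetical path
        out = model(prompt, max_tokens=128)
        # ...and free RAM/VRAM again as soon as we're done.
        del model
        return out["choices"][0]["text"]

    print(answer("Why is the sky blue?"))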

Does anyone actually use the CPU for anything besides testing? Last time I tried it, it was so horribly slow compared to a GPU that there wasn’t really any point for me, besides getting access to more memory.

Some of us like to experiment with new technology but don’t physically own the kind of hardware that is ideal for it. So yes, I’ve actually gotten passable results running on CPU (on a 2019 laptop, at that).
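For reference, CPU-only inference is mostly configuration; a sketch with llama-cpp-python (my assumption, the commenter doesn’t say what they ran), where a 4-bit quantized model is what makes it passable on older hardware:

    from llama_cpp import Llama

    model = Llama(
        model_path="/models/example-7b.Q4_K_M.gguf",  # hypothetical path
        n_gpu_layers=0,  # keep every layer on the CPU
        n_threads=4,     # match your physical core count
    )
    print(model("What is mmap?", max_tokens=64)["choices"][0]["text"])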

I have.

On a Mac Studio running NixOS-based Asahi Linux with 128GB of RAM, Mixtral 8x7B uses 49GB of RAM. At the same time I run Airflow tasks that deal with worldwide datasets (~60GB across 16 parallel streams on the performance cores); the data is Parquet and also mmapped.

The computer still has 8 efficiency cores and the whole GPU free for visualizing the maps using lonboard / browsing / etc.

The computer uses 8-10W when idle, ~100W when running jobs or actively using the LLM, and ~200W when really using the GPU.

This makes it very energy-efficient in my book, compared to the beast of a modern CPU and Nvidia GPU kept powered on when idle. My electricity bill is unaffected.
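The mmapped-Parquet part is straightforward to reproduce; a minimal sketch with pyarrow (my library choice, and the path is hypothetical; the parent only says the data is Parquet and mmapped):

    # pip install pyarrow
    import pyarrow.parquet as pq

    # memory_map=True lets the OS page the file in on demand rather than
    # copying it all into process memory, so parallel readers share pages
    # and the resident footprint stays manageable.
    table = pq.read_table("/data/world.parquet", memory_map=True)  # hypothetical path
    print(table.num_rows, table.schema.names)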

None. Microsoft has Copilot in preview mode in Windows and it’s not very integrated apart from a chat window. I doubt GNOME/KDE will be able to dedicate enough resources to adding an assistant that is well integrated with the desktop environment any time soon.

A search in Fedora yields a single GSoC project[0], limited in scope to NetworkManager, and it’s not clear whether anyone is actually working on it.

If the use case you’re interested in is actually having the LLM do things for you in SaaS applications, that wouldn’t need deep integration. But considering Google has yet to deliver a Google Drive client for Linux, I wouldn’t hold my breath waiting for a native Linux AI-assisted assistant.

Your best option right now is to interface with the assistants through their web interface and hope they have plugins/extensions to interact with things you want.

Other than that, some people have built prototypes running LLMs locally that talk to things like Home Assistant. But again, no deep desktop integration.

[0] https://docs.fedoraproject.org/en-US/mentored-projects/gsoc/…

Given that one can control damned near everything over the command line in Linux, and the command line is a much more stable interface than a GUI, I’d guess that there’s a great deal more potential for assistants in Linux than in Windows.

The other day I wanted to figure out how to turn my dock red if I dropped the VPN in GNOME. I found the file that controlled my WireGuard GNOME Shell extension, and with the help of GPT-3.5 and some very rudimentary JS knowledge (I’m a backend dev, don’t hate me), I was able to add a JS function to toggle the color on VPN up/down events. It didn’t even take me an hour, and I’d never even thought to try it before GPT.

Sure, things are janky now, but the future potential of LLMs with linux and OSS is huge.

“I wouldn’t hold my breath waiting for a native Linux AI-assisted assistant”

A simple chat window and an automated script to install an existing small model should be doable, but that doesn’t sound very exciting to me.

But mid-term, having a locally run LLM integrated into the OS that scans my files and can summarize folders for me would be nice. I have big folders with mixed stuff; AI would be nice for sorting that. I do believe some people are working on something like this, but the bulk of it is not OS-specific. And not OSS.
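Even without OS integration, the folder-summarizing half can be prototyped in a few lines; a rough sketch (llama-cpp-python, the model path, and the folder are my assumptions):

    from pathlib import Path
    from llama_cpp import Llama

    model = Llama(model_path="/models/example-7b.Q4_K_M.gguf")  # hypothetical path

    def summarize_folder(folder: str) -> str:
        # Just the file names here; a real tool would also sample file
        # contents, cap prompt size, skip binaries, etc.
        names = "\n".join(p.name for p in Path(folder).iterdir())
        prompt = (
            f"These files are in one folder:\n{names}\n"
            "Describe in one sentence what this folder contains."
        )
        return model(prompt, max_tokens=64)["choices"][0]["text"]

    print(summarize_folder("/home/me/Downloads"))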

Not OP, but when searching for files, spelling something wrong, or using the wrong synonym is a big problem. We’re just used to computers being inflexible.

> I wouldn’t hold my breath waiting for a native Linux AI-assisted assistant.

On a Mac, when I press Command + Space, it brings up Spotlight search.

Couldn’t the equivalent, some kind of LLM prompt, easily be added to GNOME/KDE/XFCE?

I don’t quite know what you’d ask it/do with it that would be of much value? Seems like a quicker way/a wrapper around either asking an LLM questions via CLI or basically Electron wrapping HTML (like this https://github.com/lencx/ChatGPT)?

> Couldn’t the equivalent, some kind of LLM prompt, easily be added to GNOME/KDE/XFCE?

Both GNOME and KDE have that already. Shouldn’t be too hard to implement what you’re thinking if the APIs/services are available.

Unrelated, but is there something like Bonzi Buddy for Linux? Not the spyware part, just the friendly-looking Clippy-esque character that can tell you about your new e-mails, the weather, or whatever? I kind of wish I had something like that.

pip install llm (among others, to run models locally or not). Yet KDE and GNOME have yet to integrate with, or develop a nice API for, any of these.
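That package also exposes a Python API, so a shell or launcher could call it directly; a minimal sketch (the model name is hypothetical, it depends on which plugin you install):

    # pip install llm  (plus a plugin such as llm-gpt4all for local models)
    import llm

    model = llm.get_model("mistral-7b-instruct")  # hypothetical model name
    response = model.prompt("Summarize my unread mail in one line.")
    print(response.text())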

Why? They take lots of space and lots of computing power. Linux has always been about staying lightweight and bundling only essential things. You can always install one if you need it, but as it stands right now, LLMs are not useful enough to warrant bundling in a distro. Just my 2 cents.
