 Current best open-source alternative to ChatGPT? (paid is fine, though I'm sure there are a ton of nuances to consider)

#asknostr 
 I like venice.ai
 Looks like an interesting option, thanks. 
 It is, and quite complete, but always double-check with a second opinion. I've tested Venice AI extensively and it does a fairly good job, but it will often give you incorrect information like broken links. Ask Jeeves is decent as well.
 your own brain 
 Lol not smart enough 😄

Seriously though, these models are fairly dumb and even somewhat dangerous, as they always sound extremely eloquent and confident, even when wrong. But they can take away a lot of boring work! 
 I agree with the first half, yes!

hard to find examples for the second half, though they probably exist
 Go self hosted 
 Def leaning that way, though it's obviously more work. 
 Yeah, and you need a decent graphics card, but there's no censorship and all you pay for is electricity. I just download other folks' ComfyUI workflows.
 Right, just ran a model on my gaming PC (Windows, but it has an Nvidia GPU) vs. on a laptop without one, and, lol.
 https://unleashed.chat 
 Oh nice. Will def take a look. 
 LLaMa https://www.llama.com 
 Right. Don't think I want to deploy it myself though?
 Check out DuckDuckGo AI
 Hands down. 

https://ollama.com 
 Interesting. How does one use it? 
 It's very straightforward. Think of it as a common 'bus' connecting LLMs to a daemon/service running on your system. You `pull` models, `run` models, and interact with them via CLI or GUI.
 You can use it as a chat with “ollama run [modelname]”, but it also has a systemd service and runs as a REST API, so you can build anything with it that you'd build with the cloud APIs.

 CodeCompanion.nvim is a good example that provides functionality similar to Copilot in neovim locally.
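
To make the REST API point concrete, here's a minimal sketch in Python, assuming ollama is serving on its default port (11434) and you've already pulled a model such as llama3.2; swap in whatever model you actually have:

```python
# Minimal sketch: talk to a locally running ollama daemon over its REST API.
# Assumes the daemon is on the default port and the model below has already
# been pulled (e.g. with `ollama pull llama3.2`).
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Explain quantization in one paragraph."}],
        "stream": False,  # return a single JSON object instead of a stream of chunks
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```

The CLI chat and this endpoint talk to the same daemon, so anything you've pulled once is available to both.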

One gotcha with ollama is the default quantization: it downloads the Q4_0 quantization of a model when you don't ask for a specific one, and the general sentiment today is that there are better quantizations at the same size, and that Q4_0 renders many of the small models useless. A good middle-ground value is Q6_K; you can figure out how to pick particular ones from the model index on the ollama website.
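
If it helps, here's a hedged sketch of pulling a specific quantization through the same local API instead of taking the Q4_0 default; the exact tag string below is only illustrative, so check the model's tags page on ollama.com for the real names:

```python
# Sketch: ask the local ollama daemon to pull an explicitly tagged quantization
# rather than the default Q4_0. The tag below is illustrative only -- look up
# the exact tag on the model's page at ollama.com before relying on it.
import requests

tag = "qwen2.5:7b-instruct-q6_K"  # assumed/illustrative tag, not verified

resp = requests.post(
    "http://localhost:11434/api/pull",
    # older ollama versions used "name" here instead of "model"
    json={"model": tag, "stream": False},
    timeout=None,  # pulls can take a long time on slow connections
)
resp.raise_for_status()
print(resp.json())  # a final {"status": "success"} once the download completes
```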

Models to try in the size that fits you are llama3.2, gemma2, mistral-nemo, and qwen2.5 
 try pairing ollama with https://github.com/open-webui/open-webui 👌 
 ppq.ai is very convenient 
 Nice option, thanks! Best aspect is LN pay as you go. 
 yeah.
not needing an account
pay with Monero as you go
select the model you want 👍

a good first AI experience, I think
 Jan.ai 
 Looks like a great option, thanks! 
 I'm curious, did you compare this with lmstudio.ai?
 Nope. 
 Jan.ai with a PPQ API key. 😉  
 I assume you're aware of https://chat.bitcoinsearch.xyz/

It's not general purpose but a great dev resource. 
 Interesting search tool, thanks! 
 Got a lot of good answers here, thanks.

My (I'm sure not even a tiny bit novel) summary of playing with current gen LLM tools is that they're likely very useful in three main situations:

One is when your task is more synthetic/creative: writing a letter, generating ad copy, or creating a game, story, etc., where you need some flow of interesting human concepts (written well where that matters). I haven't tried any image stuff, only text, but I guess it'll be very useful there too now, since it's getting better. To be clear, I'm only guessing about this use case. Maybe creative people will find it useless, but that seems unlikely.

The second is where you're doing something you aren't yet an expert in: an example might be learning a foreign language at an intermediate level; it'll be a great practice tool. Or, very similarly, learning a programming language or framework you're not fully comfortable in yet. The problem is that when you are an expert in a technical thing, especially a less common one, the kinds of questions you have are too difficult, and it will confidently give you answers that are *very* likely to be flat-out wrong.

The third is the "donkey work" case: imagine having an intern who just graduated and remembers *all* the basics and theory but doesn't know much about the gotchas of real life in your field. You can give them tasks that would take time but are not easy to screw up. For example, writing unit tests for code (it does seem like they are *particularly* good at software, unsurprisingly). You might still need to review the work to double-check it, but you've saved a bunch of your time. I guess this is going to be the *main* use case until/unless AI has another quantum leap.



nostr:nevent1qqsghq5lypcd2xlfzlxur6xzv944r5rafapm6xd0k93zulygn9pg50cpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygr8twz0ua0zz64eglr58rh9r898wafhdh0stkklhf3830gp9cwh9qpsgqqqqqqs6kleh9