This really whips the llama’s ass
Mastodon: @greg@clar.ke
The sub’s implosion was probably Clippy’s fault
I was swimming with some buddies and we came across an egg
This is me partying with him a couple of months ago in Toronto. Just an awesome performer.
Oracle Sales Lawyers are the worst
I’m not sure about questions to ask but you should definitely wear a monocle
MicroPython is good for limited hardware projects.
Whoever filed the patent obviously didn’t consider the optics of this. I’m in the market for a new car and was considering a Ford. This will factor into my decision.
Facebook, I’ve been off it for 5ish years now. I miss some connections but I am much happier for it.
Was it a password reset email?
I’m surprised that Tailscale can’t get through; clever routing is one of Tailscale’s features. Though I do sometimes have connection issues with Tailscale when running DNS-over-HTTPS on my laptop.
I have no useful advice but much sympathy.
It’s not an ideal solution, but you can save your Google Takeout archives to Dropbox. It might be worth signing up for Dropbox for one month and using its sync features. I haven’t used Dropbox in years, but they used to have quite solid syncing.
Quickly dip them in the sun (or any G-type star). The high heat will immediately carbonize the outside keeping the mozzarella contained (and also carbonized).
That’s exactly what a virus that was trying to trick me would say…
Fun fact, DEDSEC is a type of memory used in Soviet era mainframes.
That’s 128GB of RAM; the GPU has 24GB of VRAM. Ollama has gotten pretty smart with resource allocation. Smaller models can fit solely in my VRAM, but I can still run larger models in RAM.
I’ve installed Ollama on my gaming rig (RTX 4090 with 128GB RAM), M3 MacBook Pro, and M2 MacBook Air. I’m running Open WebUI on my server, which can connect to multiple Ollama instances. Open WebUI has its own Ollama-compatible API, which I use for projects. I’ll only boot up my gaming rig if I need to use larger models; otherwise the M3 MacBook Pro can handle most tasks.
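Since Open WebUI exposes an Ollama-compatible API, a client only needs to build one request shape to talk to any of those instances. A minimal sketch of assembling a non-streaming request body for Ollama’s `/api/generate` endpoint, assuming the default port and a hypothetical model name:

```python
import json

# Default Ollama endpoint; swap host/port for an Open WebUI instance.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,    # hypothetical name; must be pulled on the server
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
    }
    return json.dumps(payload)

body = build_generate_payload("llama3", "Why is the sky blue?")
# POST `body` to OLLAMA_URL with urllib.request or requests to get a reply.
```

Because the API shape is the same everywhere, the only thing that changes between the gaming rig and the MacBooks is the URL.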
I agree. Very few people in industry are claiming that LLMs will become AGI. The release of o1 demonstrates that even OpenAI are pivoting from pure LLM approaches. It was always going to be a framework approach that utilizes LLMs.