• 0 Posts
  • 4 Comments
Joined 1 year ago
Cake day: August 2nd, 2023

  • The whole “it’s just autocomplete” line is a comforting mantra. A sufficiently advanced autocomplete is indistinguishable from intelligence. LLMs provably have a world model, just like humans do. They build that model by experiencing the universe via the medium of human-generated text, which is much more limited than human sensory input, but has already allowed for some very surprising behavior.

    We’re not seeing diminishing returns yet, and in fact we’re going to see some interesting stuff happen as we start hooking up sensors and cameras as direct input, instead of having these models build their world model indirectly through text alone. Let’s see what happens in 5 years or so before saying that there are any diminishing returns.


  • Gary Marcus should be disregarded because he’s emotionally invested in The Bitter Lesson being wrong. He really wants LLMs to not be as good as they already are. He’ll find some interesting research about “here’s a limitation that we found” and turn that into “LLMS BTFO IT’S SO OVER”.

    The research is interesting for helping improve LLMs, but that’s the extent of it. I would not be worried about the limitations the paper found for a number of reasons:

    • There doesn’t seem to be any reason to believe that there’s a ceiling on scaling up
    • LLMs’ reasoning abilities improve with scale (notice that for the kiwi example they included answers from o1-mini and llama3-8B, which are much smaller models with much more limited capabilities; GPT-4o got the problem correct when I tested it, without any special prompting techniques or anything)
    • Techniques such as RAG and Chain of Thought help immensely on many problems
    • Basic prompting techniques help, like “Make sure you evaluate the question to ignore extraneous information, and make sure it’s not a trick question” (there’s a rough sketch of this right after the list)
    • LLMs are smart enough to use tools. They can go “Hey, this looks like a math problem, I’ll use a calculator”, just like a human would
    • There’s a lot of research happening very quickly here. For example, LLMs improve at math when you use a different tokenization method, because it changes how the model “sees” the problem
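
    On the prompting/CoT point, here’s a minimal sketch of what I mean, using the OpenAI Python client. The model name, the prompt wording, and the kiwi-style question are all my own stand-ins (the question is paraphrased from memory, not the paper’s exact wording):

    ```python
    # Rough sketch: chain-of-thought-style instructions plus a hint to ignore
    # extraneous details, sent through the OpenAI chat completions API.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = (
        "Think through the problem step by step before answering. "
        "Evaluate the question, ignore extraneous information, "
        "and make sure it's not a trick question."
    )

    # Distractor-style word problem in the spirit of the kiwi example
    # (numbers and wording are illustrative, not quoted from the paper).
    question = (
        "Oliver picks 44 kiwis on Friday and 58 on Saturday. On Sunday he picks "
        "double what he picked on Friday, but five of them are a bit smaller "
        "than average. How many kiwis does Oliver have?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)
    ```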

    Until we hit a wall and really can’t find a way around it for several years, this sort of research falls into the “huh, interesting” territory for anybody who isn’t a researcher.



  • If you’re writing code that generic, why wouldn’t you want str to be passed in? For example, Counter('hello') is perfectly valid and useful. OTOH, average_length('hello') would always be 1 and not be useful. OTOOH, maybe there’s a valid reason for someone to do that. If I’ve got a list of items of various types and want to find the highest average length, I’d want to do max(map(average_length, items)) and not have that blow up just because there’s a string in there that I know will have an average length of 1.
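
    To make that concrete, here’s one way average_length could look (it isn’t defined anywhere in this thread, so the body and the type hints are just my guess at a generic version):

    ```python
    from collections.abc import Iterable, Sized

    def average_length(items: Iterable[Sized]) -> float:
        """Mean len() of the elements of any iterable of sized things."""
        items = list(items)
        if not items:
            raise ValueError("average_length() needs at least one item")
        return sum(len(item) for item in items) / len(items)

    # Duck typing means a bare str "just works": every character has length 1,
    # so average_length('hello') == 1.0 instead of blowing up.
    data = [["ab", "cde"], "hello", ("x", "yzzy")]
    print(max(map(average_length, data)))  # 2.5
    ```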

    So this all depends on the specifics of the function you’re writing at the time. If you’re really sure that someone shouldn’t be passing in a str, I’d probably raise a ValueError or issue a warning, but only if you’re really sure. For the most part, I’d just use appropriate type hints and embrace the phrase “we’re all consenting adults here”.
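
    And if you do go the strict route, it’s just a one-line guard up front; something like this (the name is mine, purely for illustration):

    ```python
    def average_length_strict(items):
        """Like average_length above, but rejects a bare str, which is probably a mistake."""
        if isinstance(items, str):
            # Or soften this to warnings.warn(...) if you don't want to break callers.
            raise ValueError(
                "got a str; its average element length is always 1 -- "
                "did you mean to pass a list of strings?"
            )
        items = list(items)
        return sum(len(item) for item in items) / len(items)
    ```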