• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: July 15th, 2023



  • Liquor Bottle by Herbal T. Has a nice faux-upbeat rhythm with jazzy kinda beats, but the lyrics are dark. Definitely helps me keep a sane face on the dark days:

    And that’s why / I keep a

    A liquor bottle in the freezer ♪

    In case I gotta take it out ♫

    Mix me a drink

    To help me

    Forget all the things

    In my life that I worry about ♪ ♫


  • Yes? I think that depends on your specific definition and requirements of a Turing machine, but I think it’s fair to compare the amalgamation of cells that is me to the “AI” LLM programs of today.

    While I do think that the complexity of input, output, and “memory” of LLM AIs is limited in current iterations (and thus makes it feel like an unfair comparison to “human” intelligence), I do think the underlying process is fundamentally comparable.

    The things that make me “intelligent” are just a robust set of memories, lessons, and habits that allow me to assimilate new information and experiences in a way that makes sense to (most of) the people around me. (This is abstracting away that this process is largely governed by chemical reactions, but the fact that consciousness appears to be just a particularly complicated chemistry problem reinforces the point I’m trying to make, I think.)


  • and exercise caution when you’re unsure

    I don’t think that fully encapsulates a counterpoint, but I think it has the beginnings of a solid counterpoint to the argument I’ve laid out above (again, it’s not one I actually devised, just one that really put me on my heels).

    The ability to recognize when it’s out of its depth does not appear to be something modern “AI” can handle.

    As I chew on it, I can’t help but wonder what it would take to have AI recognize that. It doesn’t feel like it should be difficult to have a series of nodes along the information processing matrix track “confidence levels”. Though, I suppose that’s kind of what is happening when the creators of these projects try to keep their projects from processing controversial topics. It’s my understanding those instances act as something of a short circuit (if you will) where, when confidence “that I’m allowed to talk about this” drops below a certain level, the AI will spit out a canned response instead of actually attempting to process the input against the model.

    The above is intended as more of a brain dump than a coherent argument. You’ve given me something to chew on, and for that I thank you!
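    The “short circuit” idea above can be sketched in a few lines. This is purely a hypothetical illustration (the function names, the toy model, and the 0.75 threshold are all made up for the example), not how any real system is implemented:

    ```python
    # Hypothetical sketch: gate a model's answer on a confidence score and
    # fall back to a canned response when confidence drops below a threshold.

    CANNED_RESPONSE = "I'm not able to help with that topic."
    CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff, chosen arbitrarily here

    def generate_with_guard(prompt, model):
        """`model` is assumed to return (answer, confidence in [0, 1])."""
        answer, confidence = model(prompt)
        if confidence < CONFIDENCE_THRESHOLD:
            # The "short circuit": skip the model's answer entirely.
            return CANNED_RESPONSE
        return answer

    def toy_model(prompt):
        """Stand-in model for demonstration only."""
        if "controversial" in prompt:
            return ("...", 0.2)
        return ("Here's a straightforward answer.", 0.9)

    print(generate_with_guard("a controversial topic", toy_model))
    # falls back to the canned response
    ```

    Real deployments are presumably far more involved (separate classifiers, training-time interventions, etc.), but the control flow of “low confidence → canned output” is the same shape as the guess above.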


  • I have to say no, I can’t.

    The best decision I could make is a guess based on the logic I’ve determined from my own experiences that I would then compare and contrast to the current input.

    I will say that “current input” for humans seems to be broader than what is achievable for AI, and the underlying mechanism that lets us assemble our training set (read as: past experiences) into useful and usable models appears to be more robust than current tech. But to the best of my ability to explain it, this appears to be a comparable operation to what is happening with the current iterations of LLM/AI.

    Ninjaedit: spelling



  • For the record, comp sci major here.

    So I understand all that, but my counterpoint: can we prove by empirical measure that humans operate in a way that is significantly different? (If there is such a measure, I would love to know, because I was cornered by a similar talking point when making a similar argument some weeks ago.)