If you decompile that code you won’t get lambdas. You get ifs. Because that is how the hardware is built: ifs, ands, and ors are what computing is built on. Everything else is flavor.
The scope is irrelevant; as presented, it’s a single-function class. It was a single method that they broke out “to hide the ifs.” Then they just used compiler tricks to remove the word ‘if’ from the assignments. The comparisons are still there, and depending on the execution path those constants may not be so constant at runtime.
They’re still ifs. They’ve just been lambda’d and assigned to constants.
This doesn’t get rid of the if statements. It hides them.
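A minimal sketch of the point, with hypothetical code (not the snippet being discussed), assuming plain Java: hiding the branch behind a lambda assigned to a constant doesn’t remove the comparison, and disassembling shows the same conditional jump an explicit if produces.

```java
// Hypothetical example, not the code under discussion: the branch hidden
// behind a lambda still compiles down to the same comparison.
import java.util.function.IntPredicate;

public class HiddenIf {
    // "If-less" version: the comparison is tucked into a lambda assigned to a constant.
    static final IntPredicate IS_ADULT = age -> age >= 18;

    // Explicit version with the dreaded keyword.
    static boolean isAdult(int age) {
        if (age >= 18) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(IS_ADULT.test(21)); // true
        System.out.println(isAdult(21));       // true
        // Disassemble with `javap -p -c HiddenIf`: the lambda body lives in a
        // synthetic method containing the same if_icmplt conditional jump
        // that the explicit if produces.
    }
}
```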
Yes. Many of you are. I’m one of those technical people you speak of. I work with half a dozen devs who all think like you. They’re all failing to keep up, in their metrics, with those of us capable of using and finding uses for new tech, including AI. The others are being pushed out, as will most of those in here complaining. The POs notice. You will be outpaced, like when Google first dropped and people were still clinging to their favorite Ask Jeeves searches.
Yes. Some LLMs can do math. It’s a documented thing. Just because you’re unaware of it doesn’t mean it doesn’t exist.
Others may find this sacrilege, but this is one of the best uses for AI at the moment. I toss it methods and ask it to describe what’s happening in each method. Then, after you’ve gone through a whole class, ask it to describe the whole class. If you break it up well, it can very quickly document massive code sets for both technical and non-technical people. Even better, it can take that same documentation and convert it into highly detailed markdown for wikis.

This will also help you review your code. If the AI is having issues understanding what you’re doing, you can bet anyone else dropping into it without backup is going to have issues too. Particularly POs, QA, scrum masters, and all those they meet with when you’re not there.

It has saved me repeatedly: showing up in meetings where those other non-technical folks just “ho hum” their way through meeting questions, then come back asking where the docs are because they usually don’t even bother looking. “No one else had anything documented and we didn’t see it, so we just wondered.” “Yep, here it is: here it is broken down by class, here’s the method, here’s the variable types in and out, and here’s the quick overview at levels 1, 2, 3, 4, and 5,” each getting progressively more technical. My PO has thanked me repeatedly for saving her ass in meetings where they complain about lack of documentation, but never from me.
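For what it’s worth, here’s a rough sketch of that loop, assuming a hypothetical `askModel()` helper that wraps whatever LLM client you actually use (the prompts and names are illustrative, not any specific product’s API):

```java
// Rough sketch of the documentation passes described above. askModel() is a
// hypothetical stand-in for a real LLM client/API call.
import java.util.ArrayList;
import java.util.List;

public class DocPass {
    // Placeholder: swap in a real call to your model of choice.
    static String askModel(String prompt) {
        return "[model reply to: " + prompt.substring(0, Math.min(60, prompt.length())) + "...]";
    }

    public static void main(String[] args) {
        // Pass 1: one method at a time, ask what it does.
        List<String> methodSources = List.of("int add(int a, int b) { return a + b; }");
        List<String> methodDocs = new ArrayList<>();
        for (String method : methodSources) {
            methodDocs.add(askModel(
                "Describe what this method does, including inputs and outputs:\n" + method));
        }

        // Pass 2: once the whole class is covered, ask for a class-level summary.
        String classDoc = askModel(
            "Summarize the class these methods belong to:\n" + String.join("\n\n", methodDocs));

        // Pass 3: convert the summary into wiki-ready markdown at several levels
        // of detail, from non-technical overview to fully technical.
        String wikiPage = askModel(
            "Rewrite this as detailed markdown for a wiki, with overview levels 1-5:\n" + classDoc);

        System.out.println(wikiPage);
    }
}
```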
Lemmy is full of AI luddites; you’ll not get a decent answer here. As for the other claims: they are not just next-token generators, any more than you are when speaking.
There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure that distinguishes them from any existing LLM. They’ll never be able to give you points of measure that would separate them from parrots or ants but would exclude humans and not LLMs, other than “it’s not human or biological,” which is just fearful, weak thought.
I almost exclusively use AI now. Customized responses, no “Google it” responses, no ads.
Describe how you ‘learned’ to speak. How do you know what word comes after the next? Until you can describe this process in a way that isn’t ‘human’-only or ‘biological’-only, it’s no different. The only thing they can’t do is adjust their weights dynamically, but that’s a limitation we gave them, not something intrinsic to the system.