• 29 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 28th, 2023

  • I’m a masochist, so I usually do “New”. Lemmy is small enough that I can usually get through most of the new posts in a reasonable amount of time.

    That said, if I want a slightly chiller experience, I will use “Scaled”, which sometimes bubbles up something I might have missed.

    Finally, I will use “Active” if I’m really bored and want to see what most people are engaged with… but that is pretty rare.


  • I think this is the author being humble. jmmv is a long-time NetBSD and FreeBSD contributor (tmpfs, ATF, pkg_comp), has worked as an SRE at Google, and has been a developer on projects such as Bazel (build infrastructure). They probably know a thing or two about performance.

    Regarding the overall point of the blog, I agree with jmmv. Big O is a measure of efficiency at scale, not a measure of performance.

    As someone who teaches Data Structures and Systems Programming courses, I demonstrate this to students early on by showing them multiple solutions to a problem such as detecting duplicates in a stream of input. After analyzing the time and space complexities of the different solutions, we run the programs and measure the time. It turns out that the O(n log n) version using sorting can beat the O(n) version due to cache locality and how memory actually works.
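
    A minimal Python sketch of the kind of comparison I mean (hypothetical code, not the exact classroom exercise; which version wins in practice depends on the language, the data, and the hardware):

    ```python
    import random
    import time

    def has_duplicates_set(xs):
        """O(n) expected time: remember every element in a hash set."""
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False

    def has_duplicates_sort(xs):
        """O(n log n): sort, then compare each element to its neighbor."""
        ys = sorted(xs)
        return any(a == b for a, b in zip(ys, ys[1:]))

    # Unique values, so both functions scan everything (the worst case).
    xs = random.sample(range(100_000_000), 1_000_000)

    for fn in (has_duplicates_set, has_duplicates_sort):
        start = time.perf_counter()
        fn(xs)
        print(fn.__name__, time.perf_counter() - start)
    ```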

    Big O is a useful tool, but it doesn’t directly translate to performance. If you really care about optimization, understanding how the underlying systems work is far more useful and important.

  • Introducing immortal objects into Python introduces true immutability guarantees for the first time ever. It helps objects bypass both reference counts and garbage collection checks. This means that we can now share immortal objects across threads without requiring the GIL to provide thread safety.

    This is actually really cool. In general, if you can make things immutable or avoid state, that helps you structure things concurrently. With immortal objects you can now guarantee that immutability without costly locks. It will be interesting to see what the final round of benchmarks looks like when this is fully implemented.
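
    To get a feel for what “bypassing reference counts” means, here is a minimal sketch (assuming CPython 3.12+, where PEP 683 landed): the refcount of an immortal singleton like None is pinned to a sentinel value that incref/decref no longer move.

    ```python
    import sys

    # On CPython 3.12+ (PEP 683), singletons such as None, True/False, and
    # small ints are immortal: their reference count is pinned to a sentinel,
    # so incref/decref on them are effectively no-ops.
    before = sys.getrefcount(None)

    refs = [None] * 1_000_000  # on older CPythons this bumps None's refcount
    after = sys.getrefcount(None)

    print(before, after)  # identical (and a huge sentinel value) on 3.12+
    ```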

  • No, but basically jmp.chat takes over your phone number… it acts as your carrier for voice and SMS (similar to Google Voice). Maybe not exactly what you want.

    From the FAQ:

    You can use JMP to communicate with your contacts without them changing anything on their end, just like with any other telephone provider. JMP works wherever you have an Internet connection. JMP can be used alongside, or instead of, a traditional wireless carrier subscription.

    The benefit of this is that you can receive voice and text on anything that can serve as an XMPP client.

  • It will depend on the nature of how the threaded code is structured (how much is sequential vs. how much is parallel, Amdahl’s law, etc.), but it should at least be more effective at scaling up and taking advantage of multiple cores.
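
    As a rough illustration of the Amdahl’s law ceiling (a hypothetical back-of-the-envelope calculation, not a benchmark of the no-GIL build):

    ```python
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Upper bound on speedup when only part of the work parallelizes."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # Even with 95% of the work parallelizable, 16 cores cap out around 9x.
    for n in (2, 4, 8, 16):
        print(n, "cores ->", round(amdahl_speedup(0.95, n), 2), "x")
    ```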

    That said, the change would come at a cost to single-threaded code. From PEP 703:

    The changes proposed in the PEP will increase execution overhead for --disable-gil builds compared to Python builds with the GIL. In other words, it will have slower single-threaded performance. There are some possible optimizations to reduce execution overhead, especially for --disable-gil builds that only use a single thread. These may be worthwhile if a longer term goal is to have a single build mode, but the choice of optimizations and their trade-offs remain an open issue.