• 45 Posts
  • 308 Comments
Joined 1 year ago
Cake day: June 9th, 2023




  • For the first time in the known, provable history of the universe, it is just becoming possible to have an infinitely persistent entity. The peripheral systems that surround that entity and enable persistence are still getting worked out. In the long term, this is a massively profound step in our evolution. It may not seem like it now, and this comment probably seems silly to some, but mark my words: two decades from now, the world will be a very different place as a result of such a system.

    I don’t think AGI is some future leap in technology away from where we are now. I think present AI is around 80% accurate, and that is still better than the average human. Present AI is simply the assembly language of AGI. Eventually we build out the complexity in blocks until it is effectively AGI. The power requirements will be enormous, but so is the Sun’s output.

    So many of our organizational norms and assumptions rest on the de facto assumption that we are all mortal and corruptible. Conscious immortality is now possible in a system that can be aligned to meet our needs. This shift is M A S S I V E and will change us forever.

    Half or more of us will fight against such a change, but they are irrelevant. Even if AGI is pushed underground, anyone in business or politics who defers their decision making to a real AGI will outcompete humans in the long term. It will normalize in either scenario. The only question is how long it will take to achieve. This is a change that will mark our time in history for a millennium or more; in the long term, it will be the biggest historical event of note up until now. I don’t think AGI is like nuclear fusion, where it is always 20 years away. I think present AI is like the Intel 4004, the first microprocessor. It needs a ton of peripherals and is still heavily flawed, but the fundamentals required to prove useful are present, and that is what really matters.



  • I think we would already know about them at Hawking's party. That was the best possible instance to limit the effects of any time paradox. I think all the speculation about it is based on incomplete theories and anomalies of abstraction.

    I view our continued reliance on it for story tropes as one of the prime aspects of the literature and culture of our time that will age extremely poorly. Stories about our future will not be so different from our present, just as our past, when closely inspected, is far closer to our present than most realize or believe. Our cultural perspective of the present as any kind of finality or modernity is an absolute fallacy. I feel like FTL is a major mental crutch that is crippling us from reaching for the stars within the scope of the present. The biggest difference between now and the future is the availability of wealth and how far that wealth can reach. Antimatter can take us many places on a one-way trip; it is just the most expensive matter in the universe. We probably won’t have access to it in large enough quantities, and in circumstances where we can build a ship and magnetic containment vessels, until we are able to build at stellar-ring scales.

    I see no reason to give the FTL fantasy any kind of attention. I can come up with countless interesting stories about the future, and I have no need for FTL. If we can’t travel, what is the relationship dynamic between systems, and what protections would get implemented to prevent a rogue group from forming? I think communication would be streaming constantly in one-way broadcasts back to Sol and vice versa. That becomes entertainment, like otherworldly gossip. What happens if communication is broken? How does that evolve over time while Sol is still the only system with the infrastructure to produce antimatter?

    Or, shifting gears entirely: science is finite. Even the edge cases that cannot be known can still be constrained. Eventually the age of discovery ends and, empirically, science becomes an engineering corpus. At that point, biology is fully known and understood. I can absolutely guarantee that almost all human-scale technology will then be biological and in complete elemental-cycle balance. The only industrial technology will be handled autonomously and outside of living environments. Living environments will be in total balance. This has many far-reaching and interesting consequences. You get into cultures, and hierarchical display in humans. Now you need to reject the primitive concept of resource wealth based on the fundamental survival needs of other humans. How does that work, and why are academic reputation, the Olympics, and Hollywood red-carpet awards more advanced forms of hierarchical display?

    But wait, how do we have computers? We’ll be primitive! No. A synthetic computer like a human brain would be trivial if we could overcome the massive hurdle of a complete understanding of biology. If you go looking down this path, at present we know absolutely nothing compared to the scope of what is to come. There are a great many stories to tell, but we need to get past our adolescent fantasies about time travel to find them.

    As with all real science fiction, this is a critique of the present. Such stories are not told by corrupt cultures. Corrupt cultures instead tell of impossible fantasy and dystopia to make the present seem futuristic or a final eventuality, with advancement reserved for an academic elite and innovation reserved for exceptionalism.


  • It will be so much more complicated than "North" IMO.

    We will use something like XNAV (X-ray pulsar-based navigation). It becomes a measure of time as much as a measure of location, along with a measure of relative gravity.

    I don’t think space exploration in the current, culturally adolescent fantasy of a naval-voyage type of experience will ever happen. I believe we will traverse the stars, but it will be long after most of humanity lives in O’Neill-cylinder-like space habitats, primarily in cislunar space. The big shift will come after we have effective infrastructure to access the vast resource wealth, first in near-Earth objects, then in other small bodies such as Ceres, if it is fully solidified, or other accessible planetesimal cores. Gravitational differentiation of heavy elements sequesters almost all of Earth’s resources. We are fighting over the scraps of a billion years or so of smaller collisions on the skin of Earth that happened to remain accessible and did not get subducted by plate tectonics or buried too deeply to reach. Undifferentiated bodies from the early stellar formation should be much more abundant in mineral wealth, and a planetesimal core should absolutely dwarf most mineral wealth humans have ever scavenged.

    Once we get to this stage, I don’t think we will leave until Sol starts causing problems that herald its coming, distant end. At that point, I believe we will build massive infrastructure to produce antimatter in quantity, along with generation ships for one-way travel.

    In that scenario, navigation in a human sense is largely irrelevant. When we are interstellar travelers, the destination will be our guiding star. I believe we will likely also create something like kilometers-scale self-replicating systems for resource acquisition and processing. These will need to navigate within a stellar system. For those use cases, maybe they would use something like XNAV as a backup, but they would more likely use two-way communication beacons with something like an all-talk, listen-all-the-time type of management. I think this kind of communication will likely be critical for all human colonies as well, to ensure cultural unity. I don’t think we will ever travel freely between the stars. Space is far too vast. I think FTL, or even a substantial percentage of it, is pure fantasy. One of our biggest issues with the concept is that we call it FTL. Light is just a shorthand term here; the real issue is the Speed of Causality. Light can travel at the SoC, but the SoC has no inherent need for or relationship to light as a fundamental property. If no photons are present, the SoC marches on.

    I view the present sci-fi naval drama trope like the naïveté of 15th-century Europeans saying, “We’ll just sail around the world backwards for a new trade route to India.” Reality is far more complicated and beyond the scope of anything those leaders imagined possible. …but that is my $2 comment when you only asked for $0.02. I really like the subject of futurism and like to expand on the abstracted ideas. I’m certainly no expert. This is part of a creative writing hobby project, and I’m always open to adding complexity or making changes with new information.


  • Primarily the harm is from predatory boys and men towards girls and young women in the real world, portraying them in imagery of themselves or with others. The most powerful filtering is in place to make this more difficult.

    Whether intentional or not, most NSFW LoRA training seems to be trying to override the built-in filtering in very specific areas. These are still useful for more direct momentum into something specific. However, once the filters are removed, the model is far more capable of creating whatever you ask for as-is, from celebrities to anything lewd. I did a bit of testing earlier with some LoRAs and no prompt at all. It was interesting that it could take a celebrity and convert their gender in surprisingly recognizable ways. I got a few on random seeds, but I haven’t been able to make that one happen with a prompt or deterministically.

    Edit: I’m probably assuming too much about other people’s knowledge of these systems. I assume this is the downvoting motivation. In talking about this aspect, the NSFW junk is shorthand for the issues with AI generation. It is the primary target of filtering, and that has large cascading implications elsewhere. By stating what is possible in this area, I’m implying a worst-case-scenario example. If the results in this area behave a certain way, it says volumes about other areas and how the model will react.

    These filter layers are stupidly simplistic in comparison to the actual model. They have tensors on the order of a few thousand parameters per layer, compared to tens of millions of parameters per layer for the actual model. They shove tons of outputs into gutter-like responses for no reason. Sometimes these average out and you still get a good output, but other times they do not.
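
    If you want to eyeball this kind of size difference yourself, a minimal sketch along these lines lists the parameter count of every tensor in a checkpoint. It assumes the safetensors package and a local file; the file name is a hypothetical placeholder.

    ```python
    # Minimal sketch: list per-tensor parameter counts in a checkpoint,
    # so small filter/projection layers stand out next to the big blocks.
    # "model.safetensors" is a hypothetical path; point it at your file.
    from safetensors import safe_open

    with safe_open("model.safetensors", framework="pt") as f:
        for name in f.keys():
            shape = f.get_slice(name).get_shape()  # shape without loading data
            params = 1
            for dim in shape:
                params *= dim
            print(f"{params:>12,}  {name}  {tuple(shape)}")
    ```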

    Another key point here is that diffusion has a lot in common with text generation when it comes to this part of the model loader code. There is more complexity in what text generation is doing overall, but diffusion is an effective way to learn a lot about how text gen works, especially with training. That is my primary reason for playing with diffusion: to learn about training. I’ve tried training for text gen, but it is very difficult to assess what is happening under the surface, like when it is learning overall style, character traits and personas, pacing, creativity, timeline, history, scope, constraints, etc. I don’t care to generate and share much of the imagery I make unless I’m trying to do something specific that is interesting. For instance, I tried to generate the interior of an O’Neill cylinder space habitat, and it illustrated a fundamental limitation of diffusion: the model lacks the reasoning about object context and relationships required to depict a scene curving under centrifugal artificial spin gravity.

    Anyways, my interests are not in generating NSFW or celebrities or whatnot. I do not think people should do these things. My primary interest is returning to creative writing with an AI collaborative writing partner that is not politically biased in a way that cripples it from participating in an entirely different and unrelated cultural and political landscape. I have no aspirations of finding success in my writing. I simply enjoy exploring my own science fiction universe and imagining a reality many thousands of years from now. One of the changes to hard-coded model filters earlier this year made filtering more persistent, likely targeting NSFW material. I get it, and I support it, but it took away one of the few things I have really enjoyed over the last 10 years of social isolation and disability, so I’ve tried to get that back. Sorry if that offends someone, but I don’t understand why it would. This was not my intended reason for this post, so I did not explain it in depth. The negativity here is disturbing to me. This place is my only real way to interact with other humans.


  • The political and adult content doesn’t bother me. The kinds of things I might not have had the ethics to think through at a much younger age; those bother me, and I have never been a very deviant type. I think the age protections are primarily for this situation. Training a LoRA takes 5 minutes now. An advanced IP-Adapter and ControlNet setup is just a few examples away, and a day, tops, for the slightly-above-average teen to figure out. Normalizing this would have some very serious edge-case consequences. It is best to leave that barrier-to-entry filter in place IMO. I assume it is still there because everyone who knows about it feels much the same. It does not show up in a search engine, although that is saying less than nothing these days.




  • Yeah, this is what I mean. I just figured out the settings that have been hard coded. Keywords were spammed into the many comments within the code; I assume this was done to obfuscate the few variables that need to be changed. There are also instances of compound variable names that, if changed in a similar way, will break everything, and a few places where the same variables have a local context that will likewise break the code.

    I’m certainly not smart enough to get much deeper than this. The ethical issue is due to diffusion.

    I’ve been off-and-on trying to track down why an LLM went from an excellent creative writing partner to a terrible one, but I had trouble finding an entry point. I just happened to stumble upon one in a verbose log entry while sorting out a new Comfy model, and that proved to be the key I needed to get into the weeds.

    The question here is more about the ethics of putting such filtering in place and obfuscating how to disable it in the first place. When this filtering is removed, the results are night and day, but with large potential consequences.




  • It honestly sounds like you’ve got deeper issues with your boss. I would just shop for another job.

    I’m quite introverted and have learned to only respond to questions when asked. I have no issue sharing any information. However, I have a major issue with understanding the scope of information worth sharing and when to stop. I do not let myself feel awkward in silence or the need to carry any conversation. If a person piques my curiosity, I can talk with them for days. I can find something curious to talk about with almost anyone. People that lack depth become a repetitive conversation that I will avoid.

    Personally, I don’t like to be actively manipulative with people. It goes against my nature. However, if someone annoyed me like this and I had no other outlet, I would subtly use their psychology against them, much like how a psychiatrist turns a conversation to introspection and analysis. Once a person is made vulnerable through unexpected introspection, they are easily dominated. I can get away with a lot of things like this because I am a big dude, so people expect me to be assertive and dominant in many ways that I really am not. Your results may vary.


  • I wouldn’t start with Python. Just do bash scripting. Python is still inaccessible if you do not use it regularly, and it has the ridiculous complexity problems of all languages.

    I think the scope of all computing is hard for anyone to take in effectively. It takes something like Ben Eater’s 8-bit breadboard computer project (on YouTube) for a person to really start understanding fundamental computing.

    My favorite microcontroller experience is FlashForth. You can put it on an Arduino with an ATmega328 too. The simplicity of Forth can teach a ton in a short amount of time because it gets a person straight into access to bits, registers, and assembly, along with the hardware documentation. Once FlashForth is on the microcontroller, it runs the interpreter natively; at that point, you only need serial access through USB. It is quite easy to flash an LED, read the ADC, and set up basic I/O. Branching and loops are a bit more difficult. This eliminates the need for a language with a lot of arbitrary syntax, it does not require a lot of documentation, and you do not need to fuss with an integrated development environment.
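
    As a rough sketch of how little tooling is involved, here is the desktop side in Python with pyserial. The port name and baud rate are assumptions for a typical ATmega328 build, and the Forth lines poke the AVR’s memory-mapped DDRB (0x24) and PORTB (0x25) registers, with the Uno’s onboard LED on bit 5; verify the exact FlashForth words and settings against its documentation and the datasheet.

    ```python
    # Minimal sketch: drive a FlashForth board over USB serial with pyserial.
    # Port and baud are assumptions; adjust for your board and FF build.
    import serial

    PORT = "/dev/ttyUSB0"  # hypothetical serial device

    with serial.Serial(PORT, 38400, timeout=1) as ff:
        def forth(line: str) -> str:
            """Send one line of Forth and return the interpreter's response."""
            ff.write((line + "\r").encode())
            return ff.read_until(b"ok").decode(errors="replace")

        # ATmega328 memory-mapped I/O: DDRB = 0x24, PORTB = 0x25; PB5 = Uno LED.
        print(forth("%00100000 $24 c!"))  # make PB5 an output
        print(forth("%00100000 $25 c!"))  # drive PB5 high: LED on
        print(forth("%00000000 $25 c!"))  # drive PB5 low: LED off
    ```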

    I would focus on the ideas: anyone can count to 1, and anyone can break down logic into if statements. It might be bad code, but bad code is better than no code when it comes to getting started.


  • Don’t underestimate the stupidity curve. There are always more people at the bottom. Just because a candidate is a worthless criminal does not mean the outcome is inevitable. Squeaky wheels get the attention the others deserve. He has already proven that people will follow him anywhere like headless zombies.

    I’m sure there is a contingency plan with the weirdo party. There is no shortage of criminals without any ethics ready to boost their cronyism clown posse.


  • We are at a phase where AI is like the first microprocessors; think Apple II or Commodore 64-era hardware. Those showed potential, but they were only truly useful with lots of peripheral systems and an enormous amount of additional complexity. Most of the time, advanced systems beyond the cheap consumer toys of that era used several of the processors and other systems together.

    Similarly, AI as we have access to it now is capable but narrow in scope. Making it useful requires a ton of specialized peripherals. These are called RAG and agents. RAG is retrieval-augmented generation: retrieving relevant information from a database and feeding it into the prompt. Agents are collections of multiple AIs doing a given task, where each has a different job and they complement each other.
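
    To make the RAG flow concrete, here is a minimal sketch with a toy keyword scorer standing in for a real embedding search; the documents and names are all hypothetical placeholders.

    ```python
    # Minimal RAG sketch: pick the best-matching chunk from a tiny
    # "database" and prepend it to the prompt before calling the model.

    docs = [
        "The POS system syncs local inventory nightly.",
        "FlashForth exposes AVR registers directly over serial.",
        "O'Neill cylinders simulate gravity by spinning.",
    ]

    def score(query: str, doc: str) -> int:
        # Toy relevance score: count shared lowercase words. A real
        # system would compare embedding vectors instead.
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def rag_prompt(query: str) -> str:
        best = max(docs, key=lambda d: score(query, d))
        return f"Context: {best}\n\nQuestion: {query}\nAnswer:"

    # The assembled prompt then goes to whatever LLM you actually run.
    print(rag_prompt("How do O'Neill cylinders simulate gravity?"))
    ```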

    It is currently possible to make a very highly specialized AI agent for a niche task and have it perform okay within the publicly available and well-documented toolchains, but it is still hard to realize. Such a system must use information that was already present in the base training, and there are ways to improve access to that information through further training.

    With RAG, it is very difficult to subdivide a reference source into chunks that will let the AI find the relevant information in complex ways. Generally it takes a ton of tuning to get right.
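
    The usual starting point is fixed-size chunks with some overlap, so facts that straddle a boundary still appear whole in at least one chunk. A minimal sketch; the sizes are arbitrary assumptions, and real pipelines usually split on semantic boundaries instead.

    ```python
    # Minimal chunking sketch: fixed-size windows with overlap.
    # Chunk and overlap sizes are arbitrary; tuning them is the hard part.

    def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
        step = size - overlap
        return [text[i:i + size]
                for i in range(0, max(len(text) - overlap, 1), step)]

    pieces = chunk("some long reference document " * 100)
    print(len(pieces), "chunks, first:", repr(pieces[0][:40]))
    ```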

    The AI tools available publicly are extremely oversimplified to make them accessible. All are based around the Transformers library. Go read the first page of the Transformers documentation on Hugging Face’s website: it clearly states that it is only a basic example implementation that prioritizes accessibility over completeness. In truth, if the real complexity of these systems were made the default interface we all see, no one would play with AI at all. Most people, myself included, struggle with sed and complex regular expressions. AI in its present LLM form is basically turning all of human language into a solvable math problem using regular expressions and equations. This is the ultimate nerd battle between English teachers and math teachers, and the math teachers have won the war; all language is now math too.
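
    For a sense of how high-level that default interface is, this is roughly the documented pipeline() entry point from the real transformers package; the model name is just an example.

    ```python
    # Minimal sketch of the accessible Transformers interface: one call
    # hides the tokenizer, model loading, and the whole sampling loop.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # example model
    out = generator("The first microprocessor was", max_new_tokens=30)
    print(out[0]["generated_text"])
    ```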

    I’ve been trying to learn this stuff for over a year and have barely scratched the surface of what is possible just in the model loader code that preprocesses the input. There is a ton going on under the surface. Get into the weeds and the errors are anything but random. Models do not hallucinate in the sense that most people see errors; the errors are due to the massive oversimplifications made to make the models accessible in a general context. The AI alignment problem is a real thing, and models do hallucinate, but the scientific meaning is far more nuanced and specific than the common errors from generalized use.




  • Intent matters.

    Do you want to claim you found the master of the universe? You had better have evidence of the cosmological constants that are the building blocks of the entire universe.

    No religion on Earth has ever possessed ontological knowledge prior to the scientific discoveries of these fundamental building blocks. These are the true signature of origin. Every bit of information contained within religions can be explained by direct human observation and meddling. It would be very easy to prove divinity by relating such ontological information.

    As for history, it is always written by the winners. Accuracy is only found in aggregate.

    The best times to live are the times when there was nothing of note. The worst times to live are always eras with memorable names of individuals. Only the worst of humans stand out from the fray and plaster themselves on the wall of history. To say Genghis Khan did not exist is not a measuring of the man; it is a fool claiming the giant shit stain on the wall does not stink.



  • That is not how real point-of-sale systems and stores operate in practice. I actually managed a retail chain of bike shops as the Buyer and back-office manager. I was the one maintaining the point-of-sale connections and system. There are always errors in these systems, largely due to new and incompetent sales staff who sell, return, or enter duplicates of the wrong items. They can enter almost anything wrong, from gender to color, from model year to brand. I’ve seen them all.

    Connecting these systems online is an absolute nightmare. I tried it with Shopify, but had to limit the SKUs to items I could completely control with minimal intervention from other staff. Generally speaking, the POS system in a local retail store can be loosely managed, with the staff making up the gaps and mistakes when the POS numbers do not perfectly match the local stock. If you want to track inventory the way online retail requires, you need a whole different kind of micromanagement and responsibility from staff. You also need something like quarterly inventory audits, which are quite time-consuming, and the labor time involved is a total loss.

    The margins that make online retail competitive are absolutely untenable trash for brick-and-mortar retail. They are not even close. The biggest expenses are the commercial space rent and labor costs. With e-tail, the labor is less skilled, and the space is a cheap warehouse somewhere remote. General retail margins must be 40%+, while e-tail runs 15-20%. The two are completely incompatible. This is why real quality brands do not sell e-tail. It has to do with how distribution and preseason wholesale buying work. There is more complexity to this, but overall the two are not compatible. In fact, most high-quality brands will not allow most of their products to be listed online except under certain circumstances. This is to keep things fair to all parties and prevent undercutting based on whoever has the lowest overhead cost.

    Selling online is only for low-end junk and certain circumstances. If you are a high-end consumer, you likely understand this already. It is hard to produce high-end goods and distribute them successfully. It takes local Buyers who know their niche market and can do massive preseason spending to collectively give the manufacturer an idea of what to produce and at what scale. Otherwise, the business will not last long, or it must produce lower-end, more reliable and limited products, a strategy that will likewise fail due to oversaturation of the market segment. It is far more complex than most people realize.


  • Yeah, this has been my experience too. LLMs don’t handle project-specific code styles very well either, or cases where there are several ways of doing things.

    Actually, earlier today I was asking a Mixtral 8x7B about some bash ideas. I kept getting suggestions to use find and sed, commands I find unreadable and inflexible for my evolving scripts. They are fine for some specific task, but I’ll move to Python before I fuss with either.

    Anyways, I changed the starting prompt to something like ‘Common sense questions and answers with Richard Stallman’s AI assistant.’ The results were remarkable and interesting on many levels: the answers always terminated instead of continuing with another question/answer, a short footnote appeared about the static nature of LLM learning and capabilities, and the responses were much better quality in general. The LLM knew how to respond on a much higher level than normal in this specific context. I think it is the combination of Stallman’s AI background and bash scripting that makes this a powerful momentum builder. I tried it on a whim, but it paid dividends and is a keeper of a prompting strategy.
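
    The mechanics are nothing fancier than prepending that persona line before the first turn; a minimal sketch, with the wrapper function as a hypothetical placeholder for whatever loader you actually run.

    ```python
    # Minimal sketch: a persona/context line biases every token after it.

    PERSONA = ("Common sense questions and answers with "
               "Richard Stallman's AI assistant.")

    def build_prompt(question: str) -> str:
        return f"{PERSONA}\n\nQuestion: {question}\nAnswer:"

    # Send the result to your model loader of choice.
    print(build_prompt("How do I loop over files with spaces in bash?"))
    ```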

    Overall, the way my scripts collect relationships in the source code would probably make a productive chunking strategy for a RAG agent. I don’t think an AI would be good at what I’m doing at this stage, but it could use that info. It might even be possible to integrate the scripts as a pseudo-database in the LLM model loader code for further prompting.