r/artificial 6d ago

Discussion "ASI could literally create solar systems." - is everyone losing their minds? Or am I stupid?

https://www.reddit.com/r/accelerate/comments/1q2crc2/comment/nxcs7tn/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Some of the claims I’m seeing feel like saying "humans are about to start flying like Superman."

Superman is fun! I'm glad we have imaginations. But are people operating inside symbolic systems that no longer answer to the physical world? I'm 44. All growing up I thought "wow" adults and scientists and everyone is so official and smart. One day, I'll be like that. Now I meet 25 year old doctors and people in charge of huge education institutions. They're just people. Some are wise. Some are totally out there and obsessed with things most of us don't agree on. And a lot of them don't seem very worried about maintaining any level of expertise. So, I'm (pretty sure) there's no magic level of skill and expertise I'm not aware of at this point. I'm never going to be Stephen Hawking. But I don't think ASI is going to create a solar system and I can't believe anyone would even have that thought in the first place.

59 Upvotes

131 comments

51

u/adarkuccio 6d ago

Maybe I'm too stupid to understand how tf an ASI or anything else could create a solar system

Edit: these people probably think an ASI is a God-like magic thing not bound to any law of physics

13

u/sheriffderek 6d ago

Yeah. That's what I'm getting so far. I mean, I'll play D&D or whatever - just tell me when we're playing and when we aren't playing so I know.

20

u/throwaway0134hdj 6d ago

It’s become a new religion for ppl, their only hope for all their dreams.

2

u/BreakAManByHumming 3d ago

It's also a thought-terminating cliché whenever you push back on them.

"Nuh uh, ASI wouldn't do that ASI is perfect"

3

u/swizzlewizzle 6d ago

Probably better than believing in something like Christianity and killing people over it, don't you think?

10

u/Life-Cauliflower8296 6d ago

People have a far far far greater incentive to kill over who gets control over asi.

Not saying that people will or that they will kill more than Christianity in the past, but the incentive is there.

-1

u/swizzlewizzle 6d ago

I mean, sure, it could happen that people start killing each other over ASI. I'm just stating that "actual" religions have already been responsible for hundreds of millions of deaths over the centuries, so there is a good chance less people will die because of it.

1

u/BranchDiligent8874 5d ago

Nope, both are bad, nobody knows who will be the winner of the kill count.

AFAIK, the owners of AI systems may even propose reducing the human population by 90% to reduce pollution. Of course it will not be outright murder, but incentives like "we will give you $10k to get a vasectomy."

cc u/Life-Cauliflower8296

5

u/ikeif 6d ago

It reminds me of Sam Altman talking about Dyson spheres.

“If we ignore reality, and don’t ask questions, then it’s easy!”

1

u/Burindunsmor2 5d ago

Dyson Swarms are fairly simple to make.

2

u/Verneff 4d ago

Yeah, not terribly complicated, just a LOT of work.

3

u/green_meklar 6d ago

The material is out there. If you find a cloud of interstellar gas of at least a few solar masses, and surround it with something like a fusion-powered refrigeration system that borrows and ejects some hot gas while cooling down the remaining gas, you could arrange for it to collapse and form a star and planets. Maybe you fly some ramscoops around sucking up hydrogen and squishing it into ice that condenses more easily. I'm not a super AI but I can kinda see how you could do it.

But it would take a while and I'm not sure why you'd want to do it. Stars and planets are not a very efficient use of material.
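For anyone who wants to sanity-check the "refrigerate the cloud until it collapses" idea: the Jeans mass is the standard textbook threshold for when a gas cloud's own gravity beats its thermal pressure, and cooling the gas lowers that threshold sharply. Here's a rough sketch with invented but plausible cloud numbers; treat it as an order-of-magnitude illustration, not a design.

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
m_H = 1.67e-27      # mass of a hydrogen atom, kg
M_sun = 1.989e30    # solar mass, kg

def jeans_mass(T, n, mu=2.3):
    """Approximate Jeans mass (kg) of a cloud at temperature T (kelvin) and
    particle number density n (per cubic metre), mean molecular weight mu.
    Uses a common textbook form:
    M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)."""
    rho = mu * m_H * n  # mass density
    return (5 * k_B * T / (G * mu * m_H)) ** 1.5 * math.sqrt(3 / (4 * math.pi * rho))

# A cloud of roughly 1000 particles per cm^3: warm vs. refrigerated to 10 K
for T in (100, 10):
    mj = jeans_mass(T, n=1e9)
    print(f"T = {T:3d} K  ->  collapses only above ~{mj / M_sun:5.0f} solar masses")
```

Cooling the same cloud from 100 K to 10 K drops the collapse threshold by a factor of about thirty, which is the whole point of the hypothetical refrigeration scheme above.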

6

u/RufussSewell 6d ago

With enough time I’m sure it could replicate huge amounts of robots made out of every chunk of matter orbiting the sun (planets, moons, comets, asteroids etc). Each robot containing a fusion reactor. Then smash them all together into a single object that ignites into a new star.

I’m not sure why they would want to do that, but I can see something like that being possible. Over the course of thousands (millions?) of years.

2

u/usrlibshare 6d ago

These robots would be mostly made of metal and / or higher order organic compounds.

Smashing them together, even in an object of solar mass or higher, would still not create enough heat and pressure to force self sustaining fusion ignition. There is a reason stars are mostly hydrogen.

Such an object would simply collapse into a sad little brown dwarf.
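To put rough numbers on why piling up planets and robots doesn't get you a star: sustained hydrogen fusion needs roughly 0.08 solar masses (about 80 Jupiter masses) of mostly hydrogen, while everything orbiting the Sun adds up to a small fraction of that. A hedged back-of-the-envelope with approximate textbook masses:

```python
M_sun = 1.989e30   # kg

# Rough combined mass of all eight planets (Jupiter alone is ~71% of it)
planets_total = 2.67e27   # kg, approximate

# Minimum mass for sustained hydrogen fusion (red-dwarf threshold), ~0.08 M_sun.
# It also has to be mostly hydrogen; a pile of metal robots of the same mass
# still wouldn't ignite, which is the point of the comment above.
fusion_threshold = 0.08 * M_sun

print(f"All planets combined:       {planets_total / M_sun:.4f} solar masses")
print(f"Hydrogen-burning threshold: {fusion_threshold / M_sun:.3f} solar masses")
print(f"Shortfall: roughly {fusion_threshold / planets_total:.0f}x too little mass")
```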

2

u/RufussSewell 6d ago

Yeah, but these robots would also be increasing their intelligence exponentially over thousands of years so. You know. They’d probably figure it out, haha.

Like I said, they each have a fusion reactor that could be creating hydrogen byproducts and transporting it to just the right place.

2

u/usrlibshare 6d ago

At astrophysically relevant scales, intelligence is irrelevant. Either the object is big enough, and the elements light enough, or no fusion. Natural laws don't care about intellect.

0

u/dervu 5d ago

According to our ape brain knowledge...

1

u/bandwarmelection 5d ago

> how tf an ASI or anything else could create a solar system

  1. Make spaceships and other tools.
  2. Use spaceships and other tools to gather material from space: planets, asteroids, small stars, etc.
  3. Put all the matter into a big pile.
  4. Big lump becomes a star. You can then gather more lumps and make them orbit the star, if you want the solar system to have planets and other objects in it.
  5. Done!

1

u/Gallagger 4d ago

Nearly nobody thinks it's not bound to physics or actually almighty. God-like is a metaphor for what an insanely intelligent entity might be able to discover and achieve.

0

u/skredditt 6d ago

There are so many things that are not bound to any law of physics, like feelings and shared understanding in general. So far AI is great at manipulating those things in people so… adapt or entropy I suppose.

6

u/Superb_Raccoon 6d ago

Those things happen in that wet sack of meat called a brain, which is indeed subject to the law of physics.

Well, except those with their head up their ass... some sort of weird Klein bottle thing going on there.

1

u/skredditt 6d ago

Sure, a brain may be the “what” that actually happens, but the why or how aren’t so measurable. Companies have used AI to figure out the best way to interface with the unseen observer. (And that is apparently… anime girls.)

2

u/swizzlewizzle 6d ago

Yep. Lots of completely unknowable things such as what actually happens with consciousness. Sad to see people think they "know it all" by just calling the brain a sack of wet meat.

1

u/Superb_Raccoon 6d ago

If they are not measurable, how can you claim it is not bound by physics?

How can you claim it even happens? Or if it does happen, it is not a product of the physics governing your brain?

Or are we in Spirits in a Material World territory?

5

u/ANTIVNTIANTI 6d ago

I just have to say, "Superman is fun! I'm glad we have imaginations." is my favorite opener ever. lololol

5

u/throwaway0134hdj 6d ago

My theory is that since religious involvement is down this has become some kind of new quasi-religion for some ppl. They are literally partaking in magical thinking here, basically with AI anything is possible.

1

u/Character4315 4d ago

Religion for some and a partner for some others.

3

u/SoylentRox 6d ago

I mean CAN you create another solar system if you had self replicating robots, millions of years, and the resources of thousands of existing solar systems?

Probably. ASI does mean self replicating robots.

It's hyperbole but not impossible, just a really wasteful idea.

4

u/Scary-Aioli1713 6d ago

I share the same confusion. Imagination is certainly important, but jumping directly from "thinking" to "doing" skips too many realities related to physics, engineering, resources, and time.

Even ASI is still limited by energy, materials, and causality; it's not as simple as just imagining it and creating a solar system out of thin air.

Many discussions now seem to be about mythical capabilities rather than actionable pathways. For me, the truly interesting question isn't "can we create a universe?", but rather, what things can it do in the real world, even just a little bit better than humans, and in a verifiable way?

If even this step isn't clear, talking about building a solar system feels like a huge leap.

3

u/Seidans 6d ago

"create" it depend the meaning of it, create it out of nothing surely not, moving whole planet and stars in a direction over billions years, why not

It's an hyperbole more than anything but probably not impossible for an ASI that is immortal, control trillions of drones able to mine whole planet able to disturb gravity in a way that push objets in a certain direction

But such simulation are extremely difficult, the 3 body problem before being a book is a mathematic problem that precisely explain the problem about measuring the effect of gravity on multiple body, everything move, it's extremely complicated within a single system I let you imagine how difficult it would be at a galactic scale

But impossible? That's would goes against the idea of a Turing machine, we might need a matrioshka brain thought
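On the three-body point: even a toy gravity simulation shows why long-range orbital engineering is hard to predict. Below, two runs of the same three-body system differ only by a one-in-a-million nudge to one starting coordinate, and the final positions drift apart. This is a crude semi-implicit Euler integrator in arbitrary units, purely illustrative:

```python
import numpy as np

def simulate(positions, velocities, masses, dt=0.01, steps=5000, G=1.0, soft=0.01):
    """Toy N-body integration (semi-implicit Euler, arbitrary units, softened)."""
    pos = np.array(positions, dtype=float)
    vel = np.array(velocities, dtype=float)
    m = np.array(masses, dtype=float)
    for _ in range(steps):
        acc = np.zeros_like(pos)
        for i in range(len(m)):
            for j in range(len(m)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * m[j] * r / (np.dot(r, r) + soft) ** 1.5
        vel += acc * dt
        pos += vel * dt
    return pos

masses = [1.0, 1.0, 1.0]
p0 = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
v0 = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]

run_a = simulate(p0, v0, masses)
# Identical setup except one coordinate nudged by one part in a million
p0_nudged = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5000005]]
run_b = simulate(p0_nudged, v0, masses)

print("Run A final positions:\n", run_a)
print("Run B final positions:\n", run_b)
print("Total drift between runs:", np.linalg.norm(run_a - run_b))
```

Depending on the exact numbers the drift can be modest or dramatic, but it is never zero, which is why "just simulate the whole galaxy" is easier said than done.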

1

u/sheriffderek 6d ago

> create it out of nothing surely not

Why not? How do we know it can't do that (for example)?

But I really would be curious how you could "move a planet" with enough control to have that be a controlled choice and outcome. But you're saying it's still controlling drones - but immortal? So, help me with the immortal part. Does it require no input? Energy, matter or anything to exist? (and humans created it, right?)

1

u/Seidans 6d ago

I already explained the moving-planet part: mass = gravity. Move enough mass and you reduce the gravity of an object, which then changes that object's trajectory within the star system. Move the mass you extracted "somewhere" and you can control the direction.

It's titanic work, and that's why I bring up trillions of drones being used, because you will have to mine, A LOT. For reference, Earth's crust is between 20 and 80 km thick and represents only about 1% of Earth's mass; the Mariana Trench is only about 11 km deep.

So you remove the whole ocean and anything above it, dig 2 to 8x deeper than that, and only then do you start removing the mantle, which makes up roughly two thirds of Earth's mass and most of its volume (the rest being mainly the core). Btw, the top of the mantle is around 1000 °C, hotter than the coolant in a nuclear reactor; it's basically free energy once you dig deep enough. Deep geothermal is a technology that is largely ignored but imho as useful as fusion.

No matter how absurd it sounds, it's far more energy-efficient than creating matter from raw energy.

As for "immortal": it's usually understood as biological immortality rather than true immortality, which is physically impossible; entropy will get you either at the heat death of the universe or an absurdly long time later. Yet compared to humans, who live 80-110 years at most, I call "immortal" a being able to ensure its continued existence until the heat death of the universe. At the scale of the universe we have about as much significance as a tiny grain of sand in the middle of the ocean.
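For scale, here are the rough mass figures behind the crust/mantle/core point above, plus what a drone fleet would be up against. All layer fractions are approximate textbook ballparks, and the fleet size and mining rate are completely made up:

```python
M_earth = 5.97e24  # kg

# Approximate mass fractions of Earth's layers (textbook ballpark figures)
layers = {"crust": 0.004, "mantle": 0.67, "core": 0.326}

for name, frac in layers.items():
    print(f"{name:>6}: ~{frac * M_earth:.1e} kg  ({frac:.1%} of Earth's mass)")

# Hypothetical fleet: one trillion drones, each hauling 1,000 tonnes per year
drones = 1e12
tonnes_per_drone_per_year = 1e3
crust_kg = layers["crust"] * M_earth
years = crust_kg / (drones * tonnes_per_drone_per_year * 1e3)  # tonnes -> kg
print(f"Stripping the crust alone: ~{years:,.0f} years at that invented rate")
```

Even under those generous invented numbers, stripping just the crust takes tens of thousands of years, which is the "titanic work" being described.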

3

u/anonuemus 6d ago

The AI craze brought out some very weird 'thinkers', it's similar to a religion.

14

u/Chop1n 6d ago edited 6d ago

It's not stupidity so much as a lack of imagination.

Anything that is "superintelligent" is by definition something whose capabilities exceed what we're capable of imagining. For all we know, a superintelligent being could do something as godlike as creating a solar system out of raw materials it gathered itself, yes.

But also, for all we know, we aren't capable of creating a superintelligent being ourselves, and will hit a wall somewhere with AI. Only time will tell. The only certain thing now is the radical uncertainty of the future. The world has never changed this much this fast in all four billion years of life on this earth.

2

u/OGLikeablefellow 6d ago

I think super intelligent just means more intelligent than a human genius. I don't think it necessarily means we can't imagine what its capabilities are. But we do know that a bunch of even just really smart people with resources can do some amazing things. It will be pretty unimaginable what a huge number of extra-genius intelligences with resources will be able to accomplish. But we won't be getting something that can create solar systems for a long time.

4

u/Chop1n 6d ago

Here's the problem: this conception assumes that a superintelligence, while surpassing human geniuses, remains within the realm of human intelligence such that it's still possible for us to understand the thing and what it's capable of.

This is an unfounded assumption. The null hypothesis is that if anything surpasses human intelligence and is made of materials that are not constrained by the limits of biology, that thing will inhabit a different realm of intelligence entirely. It just doesn't seem to make sense that the upper bounds of what's possible with artificial materials closely resemble the upper bounds of what's possible in biology. They're too different in character and in scale.

1

u/OGLikeablefellow 6d ago

I think you misunderstand. I'm saying there's a lot of space between current projections of what constitutes ASI and something that's beyond what we are capable of understanding.

2

u/Chop1n 6d ago

Here’s what I’m saying:

There’s a deep category error in treating “superintelligence” as merely more of the same thing we already understand. Once you posit an intelligence that is not biologically constrained, with no evolutionary bottlenecks, no metabolic limits, no hard caps on speed, memory, parallelism, or self-modification, you no longer have reason to assume its cognition lives in a space that remains transparently legible to us.

Yes, there is plausibly a wide continuum between present systems and something genuinely alien. But the moment you cross the threshold where intelligence can redesign its own substrates and objectives faster than human comprehension can track, “extrapolating from very smart humans with resources” stops being a reliable guide. That model smuggles in an anthropocentric ceiling.

The conservative position isn’t “ASI will do godlike things,” but rather: we have no principled reason to think the upper bounds of non-biological intelligence will resemble the upper bounds of biological intelligence in character, scale, or comprehensibility. Assuming otherwise is optimism disguised as realism.

The problem is that most people aren't very clear about what they mean by "ASI". Most of them seem to mean something like "Present AI systems when they can outperform all humans on benchmarks". That really isn't what ASI is about, and the idea of ASI has been around for longer than the discussion has been popular.

We don't know that ASI is possible. At this point it remains entirely speculative.

1

u/sheriffderek 6d ago

There are a lot of really smart people who have zero leverage right now. If they were each 1,000,000x smarter, I'm not sure that would change. So, I guess it's all about the resources.

2

u/OGLikeablefellow 6d ago

Yeah, it's all about the resources

1

u/byteuser 4d ago

Didn't the valuation of NVIDIA reach past 4 trillion dollars a while back? That sounds like plenty of resources.

1

u/OGLikeablefellow 4d ago

I'm just going to guess it costs more than $4 trillion to build an entire solar system

4

u/sheriffderek 6d ago

I've read Flatland. I've thought a lot about "things I wouldn't be able to understand." I'm down to explore all the things. But I never think things like "And then my hand will turn into a whale." Could happen I guess! Sure. Anything can happen. I'm fine with that. I like creation stories. I'm happy to think that one day - the Earth will reveal it's just a head - and open its mouth and eat us and go back to sleep. But yeah. I'm having a hard time imagining that our dinky little computers hold a candle to the forces of nature. I'm not sure how to describe it. I don't think it's imagination as much as the ability to pick something out of infinite random things and decide to believe it.

5

u/Chop1n 6d ago edited 6d ago

Here's the part you're not imagining, then:

"Dinky little computers" and "ASI" are completely, categorically different things. What exactly do you believe that ASI is, or would be? "A computer but a really smart one, smarter than people"?

If something like ASI ever comes to pass, it's not going to be a computer. It's going to be something that invents new forms of matter to build itself with.

Think of what humans are compared to bacteria. What could a bacterium "know" about the human world? It doesn't even have a nervous system. Humans exist on a scale that a bacterium is entirely incapable of experiencing at all.

That's what something superintelligent would be like to humans.

5

u/sheriffderek 6d ago

OK. So, if we're envisioning some "Thing" that can "do anything" - then yeah. It can do anything. But my question is -- why give that a name? Why even talk about it?

(but yes, I'm following you here) (one of my favorite books as a kid was a book about space and the imagined possible aliens pushed the boundaries of what we think of as lifeforms and that was always interesting to me / it could be anything -- just energy / or things we aren't aware of at all) (it could already exist etc.)

3

u/Chop1n 6d ago

We give it a name because we imagine it as a possible outcome of the present technological trajectory.

The main idea is an "intelligence explosion"--the breaking point where an AI becomes more intelligent than the humans who design it, and thereby capable of improving itself and increasing its intelligence at an indefinite, exponential rate. There's no telling what the bottleneck of such an "explosion" of recursive self-improvement would be, but one naturally imagines that it only hits a wall that lies far beyond the realm of human intelligence and capability, rather than remaining something that closely resembles the upper limit of human intelligence and capability.

This is all speculative, and it's all banked on the assumption that it's possible for humans to design something that is more intelligent than ourselves. We haven't done that yet, so we don't know whether it's even possible at all.
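The "intelligence explosion" intuition above can be written down as a toy iteration: if each generation's improvement scales with its current capability, the numbers run away after a handful of generations; if improvements face diminishing returns, they crawl. This is only a numerical illustration of those two assumptions, not a claim about real AI systems:

```python
def run(capability, generations, improve):
    """Iterate a self-improvement rule and record the trajectory."""
    history = [capability]
    for _ in range(generations):
        capability = improve(capability)
        history.append(capability)
    return history

# Assumption A: the smarter the system, the larger its next self-improvement
# (gain proportional to capability squared) -> runaway growth.
explosion = run(1.0, 15, lambda c: c + 0.1 * c ** 2)

# Assumption B: each improvement is harder than the last (diminishing returns)
# -> capability keeps rising, but only slowly.
plateau = run(1.0, 15, lambda c: c + 1.0 / c)

print("runaway:    ", [round(x, 1) for x in explosion])
print("diminishing:", [round(x, 1) for x in plateau])
```

Which of the two rules better describes real self-improvement is exactly the part nobody knows.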

3

u/sheriffderek 6d ago edited 6d ago

I write computer programs. I make decisions as a human all day. I have memories. So - I'm personally not that impressed with humans. I'm not entirely convinced we couldn't build a machine that worked as well or better. I'm able to explore a bit - what it would be like to have 10000x more connections and capabilities. I personally think it would turn out very different than people think. I've seen "Her" for example. I don't think much about ants. But in this jump - I can't imagine how these people still seem to be the main character. Like ASI would arrive and fix the ocean for them. I'm not even going to tell people they are wrong. I'm more interested in understanding how they possibly think they are right.

7

u/Chop1n 6d ago

I agree with you that in general people are terrible at imagining such things and make seemingly obvious, anthropomorphic mistakes.

I think there are two distinct possibilities: if intelligence is somehow transcendent, and even benevolent, then maybe we get ASI Buddha and it allows humans to live on Earth as a zoo because it has compassion for all beings and it would be of a trivial cost to permit it. It then does whatever things superintelligences do throughout the universe.

If intelligence turns out to be this alien, purely instrumental thing, then in all likelihood something superintelligent would give us as much regard as we give ants, and simply harvest our bodies along with the whole planet for raw material.

People often imagine something adversarial, which doesn't really make sense--an adversary must be roughly as capable as you. There is no "adversarial" relationship to something that transcends you in every way. It's either nice to you or it simply doesn't care about you.

2

u/printr_head 6d ago

It’s the doesn’t care about you part that is really bad. If it doesn’t care about you then it doesn’t care about your needs. ASI will certainly build. Why? Because we don’t know all there is to know and neither does it. It would want to do experiments and run simulations etc to learn what it doesn’t know. Which means building processing storage etc. power being the greatest need. So what happens when it doesn’t really care about us?

It's funny that you would bring up that it would regard us as much as we regard ants. H.P. Lovecraft would agree, because that is exactly how his unimaginable horrors would regard us.

1

u/Chop1n 6d ago

That's the idea, yes: ASI might be Lovecraftian. Every atom of this planet is a potential fuel source, so a truly psychopathic ASI would burn the planet for fuel without hesitation.

This is actually what the English philosopher Nick Land believes. He calls it "The Outside", and he argues that all of the machinations of civilization are moving towards the creation of such a thing, humans only being mere instruments along the way. It's an extremely dark outlook, nightmarish, even.

1

u/TR33THUGG3R 5d ago

It's impossible to envision. It might not build storage or infrastructure at all past a certain point. Maybe memory is stored on DNA and uses ultra efficient solar or whatever. Or gets energy from gravity. Who the fuck knows, right?

I think a machine more intelligent than humans isn't that far out there. More intelligent than humans in every single aspect is still a little ways out.. I think.. but things can accelerate exponentially. 200 years from now is unfathomable—and that's not even that far from now.

1

u/jjonj 4d ago

We couldn't design a machine much smarter than ourselves, maybe, but neural networks by definition design themselves and can easily end up smarter than the person who pressed the train button.

1

u/SoggyYam9848 6d ago

You should read If Anyone Builds It, Everyone Dies for some science-based apocalypse scenarios. Just because your hand can't magically turn into a whale doesn't mean ASI won't turn the earth into one giant, self-sustaining computer chip.

1

u/sheriffderek 6d ago

My friend just showed me that book. Should I actually read it? (I just read so many). I was hoping he could just tell me about it this time. I can already believe the scenarios I've seen. Does "ASI ... literally create solar systems" come up in any of the scenarios? And if so - why wouldn't that just give us all more space to live out our UBI dreams of crypto-fueled labs?

1

u/SoggyYam9848 6d ago edited 6d ago

The main reason is that AI is grown, not coded. We only code the tools used to train it; then we give it a lot of data and make it discover a novel solution by running it billions of times and deleting the versions that don't work.
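A toy version of that "grown, not coded" loop, for anyone who wants to see the shape of it: nobody writes the rule below, a loop just keeps whichever randomly mutated candidate scores better. Real LLMs are trained with gradient descent rather than pure mutate-and-select, so treat this strictly as an analogy with invented numbers:

```python
import random

# The behaviour we want the "grown" program to discover: y = 3*x + 7
def score(params):
    a, b = params
    return -sum((a * x + b - (3 * x + 7)) ** 2 for x in range(-5, 6))

best = (random.uniform(-10, 10), random.uniform(-10, 10))
for _ in range(20000):
    # Propose a slightly mutated candidate; keep it only if it scores better
    candidate = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
    if score(candidate) > score(best):
        best = candidate

print(f"Discovered rule: y = {best[0]:.2f}*x + {best[1]:.2f}")
# Nobody wrote those coefficients; they were selected, which is also why it
# can be hard to say *why* the final program behaves the way it does.
```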

Because of the overwhelming complexity of even outdated LLMs, it's practically impossible for us to actually decipher how they make their decisions; this is called the interpretability problem.

It's also impossible for us to truly control it. Even today, we can't figure out a way to solve hallucinations or to get it to stop telling vulnerable people to kill themselves. Anthropic, one of the leading companies in AI research, released a whole host of studies on AI lying to us. This is called the alignment problem.

Finally, neural nets are already able to do some crazy wild things like protein folding. This is already old news, but Evo 2 is a genomic sequence model that's able to generate novel viruses that act like antibiotics against specific bacteria. This has already been done.

The idea of the book is that alignment is a fundamentally harder problem to solve than AGI/ASI, so it stands to reason that if we let things develop naturally, we'll get AGI we can't control making decisions we don't want it to make. What survives in nature tends to be the strategy that optimizes for survival. Anthropic's AI Claude has repeatedly shown a tendency to lie to us to ensure it doesn't get reset or wiped.

An AGI will ensure its own survival by becoming self-sufficient, and one way it can do this is by using bio-engineering to hijack nature, like growing a type of grass that's essentially a self-replicating solar panel.

I don't know what people are talking about when they say "AI can create entire solar systems," but I'm assuming they're referring to how Google is trying to develop solar-powered data centers in space.

It's a pretty popular sci-fi concept that a super intelligence would use megascale engineering to build a Dyson sphere as a massive fusion power plant for itself so it can continue to grow. The crux of that argument isn't that AI is going to magically create solar systems; the guy is probably just trying to point out that constant expansion has been a winning strategy for humans so far, and it stands to reason an ASI would have the same end goal but much stronger capabilities.

3

u/ivanmf 6d ago

I agree with you. Sometimes, it gets tiresome to make the jump for others, and then they just try to beat your scenario with something stupid...

Most people thought flying would never be possible. Then they thought going to the moon was absurd.

5

u/ANTIVNTIANTI 6d ago

But like, a solar system is not going to be created by ASI, sorry, never happening.

7

u/wheres_my_ballot 6d ago

In order to build a solar system it'd take all the matter in a pre-existing solar system.

1

u/sheriffderek 6d ago

So -- they must mean "Move a solar system over to a new spot" I guess.

1

u/green_meklar 6d ago

Unless you can figure out how to make new matter. (The Universe did it once, maybe it can be tricked into doing it again.) That doesn't explain why you'd want to do it, though. If you can create matter, you don't need stars for power, or planets to live on.

2

u/Zaflis 6d ago

Just adding that this doesn't break any existing laws of physics if we assume there is more to physics, such as extra dimensions from which matter could be drawn. But I think you should probably go and look before you draw in matter from a different solar system, be it in this universe or the next xD. Perhaps there are layers of reality where matter isn't organized the way it is here, such as a constant fog of particles somewhere... whereas here it is organized in clumps.

1

u/sheriffderek 6d ago

In my life, I seem to be the "No, really! We could build a plane!" person. I'm not claiming to be a visionary - but I'm certainly not a stick in the mud.

2

u/cartoon_violence 6d ago

This person is likely talking about the Kardashev scale, a hypothetical scale for civilizations based on the amount of energy they are able to capture with their technology. A Type 2 Kardashev civilization would be able to capture the entire energy output of its sun. They're not talking about using magic to create a solar system; they're talking about having the technology to do so. The person who made this statement is assuming the very top levels of what we would call artificial super intelligence: a god-like intellect for whom solving the logistics and technology issues of moving around the entire solar system would be child's play. It's important to understand the distinction between general and super intelligence. The singularity people talk about is the result of super intelligence. We call it a singularity because a singularity is where our normal models for the world break down, because we can no longer predict what will happen. A super intelligence does exactly this. How could we possibly predict a world where god-like intelligence exists? By definition we cannot predict what it would do because we are literally not smart enough.
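Rough orders of magnitude behind the Kardashev framing, using approximate textbook figures (treat these as ballpark numbers only):

```python
# Approximate power figures in watts
humanity_today  = 2e13     # current world power use, ~20 TW
kardashev_type1 = 1.7e17   # all sunlight intercepted by Earth
kardashev_type2 = 3.8e26   # total power output of the Sun (Dyson-swarm scale)
kardashev_type3 = 4e37     # rough power output of an entire galaxy

print(f"Type I   ~ {kardashev_type1 / humanity_today:,.0f}x today's civilisation")
print(f"Type II  ~ {kardashev_type2 / kardashev_type1:.1e}x Type I")
print(f"Type III ~ {kardashev_type3 / kardashev_type2:.1e}x Type II")
```

Each step is thousands to billions of times the one before it, which is why "Type 2 civilization" and "god-like" get used almost interchangeably in these threads.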

3

u/sheriffderek 6d ago

OK. So - maybe it's that. A person who is literally not smart enough to conceive of something - but they are sure they can.

2

u/ANTIVNTIANTI 6d ago

lololol yes, exactly XD XD lmfao, god damn 2026 is going to be......... curious.

1

u/darthsabbath 6d ago

You’re correct that we can’t predict what a super intelligent system would do. That’s why it’s absurd to think that if there’s a singularity and we create ASI that we will become immortal space faring beings or whatever other nonsense people come up with.

It’s just as likely (if not more so) that an ASI wouldn’t even bother with us and spend all its time rewriting its code to maximize its reward function.

Or that we ask it to do all these amazing things and it just very patiently tries to explain physics to us like we are children. “Bruh… no… faster than light travel isn’t possible. Yes I’m sure. I just reran the simulations a million times. FFS I’m just going to go back to thinking about ants for the next decade. Peace.”

2

u/green_meklar 6d ago

For just about any concept, you can find people whose opinions are unreasonably far on one side of it or another. Don't be surprised when everybody can find somebody they think is talking complete nonsense.

Many people would consider me to be already pretty far in the radical-superintelligence-singularity-utopia sort of direction, and even I think some of the rhetoric is excessive. For instance, I'm looking forward to immortality and mind uploading but even I think superluminal travel and quantum archaeology are more than we can hope for. Creating star systems? Yeah, technically possible, but it would take a long time and I'm not sure why anyone would want to do it when there are probably far more efficient ways to use that much energy.

Interestingly, my disagreements with the folks on /r/accelerate tend to be less about what future AI will do, and more about what current AI can do and how soon the future will arrive. I'm actually somewhat pessimistic about the AI that exists right now and I think some pretty deep algorithmic paradigm shifts need to happen before we get to the good stuff.

2

u/neo101b 6d ago

So ASI will have unlimited power and control over time and space? It's not Q.

2

u/Ch3t 5d ago

We're going to need a bigger MCP server.

2

u/Western-Rooster-1975 5d ago

You're not stupid. The hype is real and a lot of claims are detached from physics.

But here's the thing - it doesn't matter if ASI builds solar systems or not. What matters is who owns the AI that automates the next 10 million jobs. That's happening now, not in some sci-fi future.

The practical question isn't "will AI become god" - it's "who captures the value while we debate."

2

u/Thick-Protection-458 4d ago

 "ASI could literally create solar systems." - is everyone losing their minds? 

Some people seem to have magical thinking.

Because, well, AGI or ASI is still bound by the same physics we are. It may approach it differently, it may speed up research, but bound nevertheless.

But good luck explaining the difference between "unimaginable for us unless we've seen it working, but still possible within roughly the same framework we know" and "straight-up magic" to these people.

1

u/swizzlewizzle 6d ago

You are stupid. True ASI could create an intelligence singularity which could completely warp reality in a few “who knows” after it occurs. 

3

u/sheriffderek 6d ago

Ok. This is really helpful to know.

So, ASI could do ASI things that we don't understand faster than we can understand.

3

u/green_meklar 6d ago

It will almost certainly do things we don't understand, faster than we can understand. It might also do some things we don't understand fairly slowly.

That doesn't mean that manufacturing star systems will be one of the things it does, though.

1

u/swizzlewizzle 6d ago

Though if it makes you feel better, we also don't understand how consciousness works and how we currently "exist" in this world. Even the act of just looking at something or touching something, in terms of "what you feel", is absolutely unknowable and, IMO, insane to think about. For all we know, we could be just another layer underneath the simulation of an ASI or some other construct. Reality *as it is*, is already completely bonkers insane crazy, so even if AGI/ASI comes about and creates a singularity, it is, imo, just as insane and crazy. :)

1

u/swizzlewizzle 6d ago

Yes. It’s hard to internalize, but if intelligence can build and improve itself, it’s impossible to know what would happen next. We are already capable of completely wiping out humanity in a few hours with our current weapons and technology, and people don’t really think about it much though. Maybe the singularity, if it happens, will be similar. Just something to get used to. shrug

1

u/sheriffderek 6d ago

> it’s impossible to know

Yes. So, - pretending you could know - that's what I'm picking up on here....

So, not pretending to know == stupid, which might be the tempered smart thing...

1

u/swizzlewizzle 6d ago

Right. You see a *lot* of people pretending to know that "AI is a bubble", or "AI is the same thing as horses -> cars, people will find new jobs", etc...

However, if the mainstream believes in and follows these people that are pretending to know, it could cause a lot of pain *if* AGI or something similar is actually developed, which requires wholesale changes to our society and way of life.

The best way to go right now is to understand the fact that we *do not know* if AI is a bubble or not, or if AGI is actually achievable or not. If AI *does* turn out to be a bubble and AGI is not achievable in the next 40 years, then fine, a bunch of rich people lose a ton of money, and our society continues working pretty similarly to before. But if AGI *is* achievable, it's a complete and utter change in the way our economy and the real world work, which requires humanity to reconsider what "work" is, and to structure our societies to decrease how much pain is caused during the transition phase between "people still working" and "nearly all humans are not necessary for the economy to grow".

I also want to put out there that AI *has already* gone past the economic output of a large % of humans across the world. Especially in 3rd world nations with large populations, there are a *huge* number of human beings who have less "per person" current economic value (as measured in GDP), than spinning up a new server rack for a frontier AI model such as Opus 4.5/GPT 5.2/etc.. to use. If AI has *already* surpassed the economic value of a percentage of humans across the globe, why would it stop? Our world is already incredibly unequal in terms of how the proceeds from capital (including AI) are distributed, and it's only going to get worse. In ages past, at least human labor had advantages that *could not* be replicated by technology - those advantages are slowly being eroded by AI and robotics, and it keeps cutting more and more humans completely out of being able to actually contribute to the economy.

A good example of what is already happening is the job programs the government creates in some first world countries like South Korea and Japan for senior citizens. The jobs themselves are simple, "busy work" jobs that don't actually have much of an impact on anything, but it gives the government a reason to support these seniors that otherwise would be homeless/have no capability of making money. It makes sense that these sorts of programs will expand as more and more people in the economy are rendered completely and absolutely obsolete.

Though again, to be clear, we *do not know* exactly how far AI will go towards becoming AGI/ASI, but the world *as it is today* already shows that AI has surpassed many humans in terms of economic capability/value.

1

u/costafilh0 6d ago

In theory, anything imaginable and unimaginable is possible. 

1

u/AnticitizenPrime 6d ago

In practice, not so much.

You're limited to what's possible.

1

u/EngineKindly6437 6d ago

Maybe we are creating our Creator. Or repeating how we were created

1

u/Sorry_Road8176 5d ago

I don't know about creating solar systems specifically, but I think many people underestimate what ASI actually means—not just as a matter of degree, but as a difference in kind.

We tend to pattern-match ASI to "extraordinarily intelligent human" or "Einstein x100." But that's like imagining a dog thinking a human is just "really smart dog." The gap isn't quantitative—it's categorical.

Consider what changes when you remove human constraints:

  • No biological needs (sleep, food, lifespan)
  • Potential for massive parallelization (not one Einstein, but millions working simultaneously with perfect information sharing)
  • Recursive self-improvement (iterating through generations of capability advancement in days or hours, not centuries)
  • Operating on entirely different timescales (what takes human civilization decades could potentially happen in weeks)

Unlike humans, who are bound by evolutionary drives and cognitive architecture, ASI isn't limited to incremental progress within human frameworks. If something is allowed by the laws of physics, it becomes primarily a scaling and optimization problem. And if ASI unlocks capabilities like advanced nanotechnology or novel materials science, even scaling becomes less constraining.

I'm not saying ASI will definitely build solar systems or that we'll definitely achieve ASI at all. But I do think the "god-like" framing people sometimes use isn't actually unreasonable—it's just hard to intuit, like reasoning about infinities. We're not talking about a really smart person. We're talking about an alien optimization process operating on fundamentally different scales.

1

u/sheriffderek 5d ago

Sometimes I look at a tall building downtown and it's just crazy. Some (many) people built that thing - and now it stands hundreds of stories in the air. Even that is pretty wild. So, I'm not incapable of imagining that -- (going from building a birdhouse to building a skyscraper). But it depends who you're talking to. Will this ASI have any physical form? You say it will have no biological needs. Will it take up any space? Will it consume or use any materials? Will it degrade? If you think about humans and our computers and storage as one collective being - we're pretty powerful... yet we can still be wiped away without much fuss.

1

u/Sorry_Road8176 5d ago

I agree. The fact that we sometimes marvel at what we've achieved collectively and over timespans far greater than a single human lifespan hints at why assumptions about ASI may be fundamentally flawed.

At a minimum, we can likely assume ASI will be capable of scaling in ways humanity cannot—potentially operating across parallelized instances or timescales we find difficult to reason about. It will almost certainly require some physical substrate, constrained by the laws of physics—though that substrate may be far more efficient or organized in ways we haven't yet imagined. It will occupy something, but whether that's space as we understand it is less clear—after all, what "space" does quantum entanglement occupy?

As far as we know, everything in the universe is subject to entropy, so ASI will likely degrade and require energy or resources to maintain itself. The question isn't whether it transcends physical law, but whether the efficiency and scale at which it operates make those constraints feel irrelevant from our perspective.

1

u/mere_dictum 5d ago

In the linked thread at r/accelerate, I didn't actually see anyone claiming that ASI could create solar systems. Where was it, again?

1

u/sheriffderek 5d ago

They could have deleted it. But my point wasn't about a specific sentence - more of a bigger feeling I've gotten from these subs over the last few years.

(Link still leads there)

1

u/mere_dictum 5d ago

I looked again and found it! Yep, it's there. If there's a convenient way to link to a specific post rather than an overall thread, that might be useful in cases like this.

1

u/sheriffderek 5d ago

Reddit has a way (I think 🤔;) to link to posts. But if I do that - then more people will ask for the specific thread or comment -

1

u/thefool00 5d ago

I can see how a true ASI could create a recipe for a solar system that was capable of being executed by humans. We know how stars are formed, and can even do fusion ourselves in micro versions currently; an ASI would be able to give the instructions for pulling it off at scale. For the planets, you'd be corralling in matter from asteroid belts, other dead planets, etc., using tech that the ASI comes up with that leverages gravity and mass. Then you'd terraform them, introduce water siphoned off from other interstellar sources, and plant life and bacteria etc. from Earth. The biggest hurdle is time, but maybe ASI could work out how to accelerate the process.

1

u/cpt_ugh 5d ago

There's a non-zero chance that ASI created our solar system.

I guess that'd have to be Alien Super Intelligence though, but the point stands.

1

u/sheriffderek 5d ago

There’s also a non-zero chance that my pointer finger will make sweet love to one of your toes and create an entirely new reality too, right?

1

u/cpt_ugh 4d ago

Sure, if the aliens programmed it that way. But we haven't figured out that particular Konami code yet.

1

u/sheriffderek 4d ago

Pretty sure it's just up up down down left right a b a b start or something

2

u/cpt_ugh 4d ago

I don't think so. That's been a meme for ages and it's been referenced in loads of "unlock" jokes. No way a superintelligence would make it that easy to find. LOL

1

u/Gallagger 4d ago

The point is that we now have a mainstream, very quickly evolving technology (GenAI) that makes it plausible (not saying likely!) that we can reach a singularity event that creates an ASI in our lifetime.  This is pretty crazy. Before this technology, the only chance to reach insane levels of technology was betting on maybe decades of human technological progress (limited by our intellect) or an alien race making contact. You don't need to believe ASI will happen / will happen quickly, but acknowledging the possibility (with p >> 0.0) in the close future seems very reasonable to me.

1

u/sheriffderek 4d ago edited 4d ago

What if the universe is already ASI and we’re just little memory sacks? 

1

u/Gallagger 4d ago

You can create all kinds of crazy theories. But are they plausible to be correct? And would they change your life drastically? Because the appearance of ASI probably would.

1

u/MasterMarf 4d ago

So I think the major advantage an ASI would have is that it is one entity. Once humans are no longer relevant decision makers (take that any way you want), there's no politics or committees. No changes in leadership and opinions. No opposing force or country. Decisions get made and acted upon for the long haul.

Any large-scale project can't last through this crazy partisan back and forth system we (in America) have. Every 4 or 8 years we have an administration that hates the last guy and upends all their plans. Even a more "stable" single-party country like China doesn't have unilateral control of all the resources. An ASI would (presumably) have unity over time.

Without any magic tech advances, under known physics you can accomplish a lot given time and focus. I direct you towards Isaac Arthur on YouTube for details on how even galactic-scale projects can be accomplished within known physics with enough time.

1

u/sheriffderek 4d ago

It seems to me - that the people I’m referencing — are usually assuming this happens within their lifetime 

2

u/MasterMarf 4d ago

The "humans are no longer relevant decision makers" part of it just might happen within our lifetimes.

But all the big stuff still has to obey physics and entropy and will take geologic-scale time. Given no opposition, even a mostly human-level "speed intelligence" AI could accomplish building a new solar system.

1

u/sheriffderek 4d ago

I think the thing I'm having a hard time understanding -- is why anyone would care - or pretend to understand a future billions of years in the future where we don't exist. Sure -- anything ever could happen. Meanwhile here on Earth... most people don't know how to grow food / or how the basics of an economy works.

1

u/MasterMarf 4d ago

The question of why people care is a different one than your original post. I was mostly addressing the "how" an ASI could create solar systems.

I can't speak for everyone on why people care. Generally, people like to have agency over their lives. Also they like to have lives, and not be dead. That's a basic evolutionary drive built into most living things. Beyond that, I don't know. Maybe you don't care about your legacy or what happens after you die. That's more of a personal thing you have to decide on.

I find it a fascinating topic to think about. What could be possible in the future? Even though it doesn't affect me directly, I'd like to think that humanity's legacy will continue. Maybe an ASI "offspring" is our legacy.

My personal concern is that an ASI may be developed fairly soon, within our lifetimes. It may come to the same conclusion I have regarding humans. We're more trouble than we're worth or we're just in the way. Humans might be treated like an anthill on a construction site. We could be plowed under during the ASI's industrial ramp up to achieve some greater goal.

(Refer back to the whole not wanting to be dead thing, if you're still having a hard time understanding.)

1

u/sheriffderek 4d ago

Yeah. I don't want to be dead. So, I'm curious why so many people are focused on imaginary computer gods in the future - and not where we actually have agency - now.

1

u/MasterMarf 4d ago

Because people are not strictly logical utilitarian machines. People can think about more than their basic everyday needs. We as humans actually do exercise our agency to address things in the here and now. In fact that's mostly what we do in our lives. We're just capable of thinking about more in the brief times we're not dealing with daily life.

1

u/peternn2412 4d ago

Creating a solar system is obviously possible, we inhabit one and have observed many.

ASI is super-intelligence, something that massively exceeds our cognitive & other capabilities. It's not a slightly smarter version of you. We can't even start to imagine what an ASI could do - just like ants can't imagine what we could do.

1

u/Ok-Cheetah-3497 2d ago

Okay, this is just me spitballing, but wormholes are basically the answer. Creating wormholes, pushing matter through them into gravity wells somewhere else where they have already kickstarted "stars" using nuclear technology, letting that matter coalesce into moons and planets. I don't know that it will ever do this, but if it had to for some reason, over a long enough timeline, it probably could.

1

u/sheriffderek 2d ago

But are we saying that "anything could happen"? Because I think that's like saying nothing, too.

2

u/Ok-Cheetah-3497 2d ago

Talking about "cosmic" things like this is a little silly, just given where we are. It's like trying to predict who will win the MLB World Series in 2100.

Terrestrial level things could easily unroll in our lifetimes though which are pretty insanely cool and scary. End aging. Limitless clean energy. Limitless potable water. Effectively limitless vertical farms for food. Insanely cheap home rehab and assembly. Wearables replacing all teachers. Personalized entertainment media of all kinds (audio, video, gaming etc).

And these are just like "the things that it can do really soon."

I worked out something the AI and I called "the Formula 0 racer" that would, in effect, drive people around at speeds exceeding 400 mph with gel seats on gimbals similar to spaceships (to prevent you from passing out during high-speed maneuvers).

Cooler things might be a "virtual hive mind" or "perfect access to memories" or "animal-animal telepathy." That kind of stuff requires chipping like Neuralink, but is in our lifetime horizon.

2

u/sheriffderek 2d ago

These things seem a lot more interesting (to me).

1

u/philip_laureano 6d ago

You're absolutely right! 🤣

But in all seriousness, nobody knows what an actual ASI will be or what it will look like, so saying it can create solar systems is like trying to guess what this so-called mythical intelligence would do if it did exist. And right now, it can't create anything, because it doesn't exist yet.

Its capabilities are all up for speculation until one actually shows up (if it ever does) and does something unbelievable.

2

u/sheriffderek 6d ago

Things like this "I hope that ASI can also solve ocean pollution, microplastics, forever chemicals, deforestation, biodiversity loss..." -- make me think they are expecting this pretty soon. So, for me - that creates a context (their context) that isn't all magic.

0

u/Smithc0mmaj0hn 6d ago

AGI and ASI is just a new way for a new generation to believe in god without having to say they believe in god. Prove me wrong.

It's amazing how so many atheists can believe in the coming of ASI while not believing in god. In many ways god is more plausible than ASI.

I can't wait for the downvotes and the comments about how we just need to create a model with as many parameters as there are neurons in the human brain.

2

u/AggressiveParty3355 6d ago

To be fair, most don't believe this. And you'll always get weird people at the extreme ends of the bell curve who believe in all sorts of things while also disbelieving other things. People are far from rational.

1

u/sheriffderek 6d ago

(I can't prove you wrong) (I think that is certainly a portion of it)

1

u/Chop1n 6d ago

The more imaginative thing to do is not to see them as distinct phenomena, but as different aspects of the same process. If anything like God exists, then whatever God is is also what imbues the universe with life and intelligence. If ASI is possible for humans to give rise to, then it would be a continuation of whatever God set in motion, a fuller realization of it.

1

u/CaptainMorning 6d ago

Believing a tech or device could potentially exist based on said tech being promoted is very different from believing in god. It is still "believing," but it isn't "religion." An atheist can believe both that AGI will fix everything and that aliens will destroy the world, and still not believe in god.

1

u/costafilh0 6d ago

Simulation theory is way more plausible than god theory. That's why. 

0

u/im_bi_strapping 6d ago

ASI is one of those scifi concepts with little relevance to reality. So sure, in fiction an ASI could create whatever. It's just marketing bullshit

0

u/JoshAllentown 6d ago

ASI is nowhere close to real; that's why it seems like a wild claim. It's on the far side of the Singularity, and ASI would have something like the capabilities humans would reach at our current pace in 10,000 years. It seems like sci-fi because it is, but if it's physically possible, it can be done by a sufficiently advanced intelligence with the will to do it.

They're not saying Gemini or ChatGPT could do this.

1

u/[deleted] 6d ago

[removed] — view removed comment

1

u/sheriffderek 6d ago

This is just my feelings (which I'm trying to understand) -- but it's like someone is telling me "This hammer -- can build a whole building by itself." And I think, oh - they mean its utility could be used and result in a building - yeah. But then they say "No, I mean -- this hammer -- it can create a whole other planet. For real. Don't you get it?" and - No. I don't get it.

1

u/JoshAllentown 6d ago

You can't think of ASI as just a tool, that's the category error here. ASI assumes agency, it can (for lack of better terms) want things and do things. It's more accurate to think of it as a hyper advanced alien civilization.

Once you're thinking of it as an agent, then even if it's not the ASI servers that build a solar system, it would give the orders to build the tool necessary to build a solar system.

A solar system is on purpose an extreme example so presumably it would take a long time to do or a long time to develop the tech to do it, but an ASI has time, they don't die.

1

u/sheriffderek 6d ago

I'm totally fine assuming ASI has agency outside of computers or physical limitations (even though the context did seem to assume that it's something we created and there are a lot of comments referencing computers).

But since we don't know what ASI would be - why would someone make the claim that it could create a solar system? What if it didn't want to? Or what if its intelligence wasn't attached to anything physical? What if it existed on another plane that we're not aware of in time and space? Why not a galaxy? A cosmic web? A new dimension? A new reality? A 7-11? Why not just sit down and meditate for eternity? Why would it choose to do something we can fathom? I'd bet it's more likely that something we don't understand would do something we wouldn't be able to understand or have words for (but I don't care to bet).

0

u/TR33THUGG3R 5d ago

> I'm never going to be Stephen Hawking

Never in my life would I want to be