> The best advice I can give anyone that wants to start a company: ask yourself why do you want to start a company? If the reason is anything other than "there is this huge problem that needs to be solved and I know a way to do it!" then you are chasing a bandwagon of the glory that comes from being a successful technology entrepreneur, and aren't actually building a company with any value behind it.
> I would imagine that in any of those situations some doctors, authors, and musicians alike would be devastated.
You don't even have to compare yourself to AI for this mentality though. There are people who choose not to compete in things because they don't believe they'll ever be as good as other humans.
I assume most composers don't go into music thinking they are going to be as great as Beethoven.
I believe there are many studies that show that if you only do something because you think you're good at it, you're likely to drop off. I imagine it's also why you're supposed to praise children for being hard working and not for being smart or talented.
As a person who likes music, making it, listening to it, breaking it down and hacking it...
Making a classical arrangement that evokes a particular expression in the listener is the job of the musician. If an AI system helps you explore the possibilities there, it's more like a studio musician that's able to improvise. You're still the person, the human, the emotional filter, that picks "This sounds right" or "This doesn't" for a particular situation. It's a judgement call. An emotional one.
An AI might be able to fake it, even communicate through it, but it will never replace humans choosing the sounds that please them more than others. Humans communicate through music. It wouldn't surprise me if an AI were able to as well. I don't think it would necessarily write emotionally strong music, though, not without human training.
Edit: I guess what I'm trying to say is, sure, computers might be able to make music. Ask any guy who messes with modular synthesizers. But they're a tool. The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself. It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."
>It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."
I'm not so sure. I often go into threads on HN and realize that every idea I could come up with on the subject has already been expressed better than I could do it, with greater expertise, and cited sources. I don't comment in those threads. If AI bots could populate a thread with every likely human thought and argue it with depth and sophistication in a well reasoned, yet carefully approachable and well-explained way, well then... again I don't think I'd feel like I would be adding much value by participating.
And yet, here I am, bringing up something no one else seems to bring up in the thread. One would also logically come to the conclusion that disparate AIs with disparate interests would find different things to express, to make music about, to draw.
What distinguishes music written by AI from music made by humans? I have a story to tell. If the AI has a story to tell, one that speaks to our human emotions, it might make good music. But the point is to communicate. Even if you take, for example, someone else's words and fit them to a different model, a different field, a different viewpoint... you might get interesting things. You could make a cover of someone else's song, with your own twist, adding your emotion to the melting pot. AIs might be good at exactly that, but only through communicating. Just like us. We have no idea whether they'll be better than us at doing it, or merely equivalent. We have no idea what is lossy in our sharing of mental models. Perhaps it is an unsolvable problem, which we will find out in the same way we found out about Gödel's incompleteness theorems.
It seems to me like we fail to understand how unique we are. We are in a unique position to shape what comes after us, and we are blind to how much we unconsciously select for things. We have an innate mental model of "humanity" we are trying to transmit to machines, and I am not sure we fully grasp it well enough to make sure we are creating something like us. We fail to do it properly to humans, sometimes, who actually do share most of our instincts and habits. Something entirely different from us? Color me skeptical.
What your comment suggests to me is that good composition requires an agent with a world model and generalized task-solving ability, along with a personality. I think developing the world model and task-solving will be the hard part, and if we can do it, it won’t be that hard to make it have a personality too. That’s just another task.
What my comment is trying to suggest is that AIs are not proven to be different from us. They might not have one "ultimate" form. They might be just like us humans. Diverse.
>>>The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself.
I think this is the key; if you're making music for your own reasons, no AI (or Mozart) would stop you. But if you're trying to make money at it, or desperately want listeners, you may eventually be on the "losing" side.
Would it? Popular music sees major paradigm shifts every few years, and AIs only really generate things based on observation of existing patterns, at least as far as I can tell.
As far as recent examples go, Lady Gaga and Lorde were major breaks from what was prevalent at the time they started releasing music, and then spawned artists trying to emulate them.
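The claim that AIs "only really generate things based on observation of existing patterns" can be made concrete with a toy sketch. Below is a first-order Markov chain over note names, a deliberately simplified stand-in for real generative models (the training melody and note names are made up for illustration): the generator can only ever emit transitions it has already observed, which is exactly why a genuine paradigm break is outside its reach.

```python
import random

def train(notes):
    """Count bigram transitions observed in an existing melody."""
    model = {}
    for a, b in zip(notes, notes[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, rng=random.Random(0)):
    """Emit notes by sampling only transitions seen in training."""
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: this note was only ever a final note
            break
        out.append(rng.choice(choices))
    return out

melody = ["C", "E", "G", "E", "C", "G", "A", "G", "E", "C"]
model = train(melody)
print(generate(model, "C", 8))
```

Every pair of consecutive output notes is guaranteed to have appeared somewhere in the training melody; the model recombines the past, it never steps outside it.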
A pattern implies that something about the future can be inferred from it.
If we oversimplify and compile a list of traits about "the world" as it was in the past that allowed a new genre or artist to flourish, AI could predict that in the future. It isn't like the paradigm shifts just happen in a vacuum.
Granted, there are probably millions of little things that lead to this: the shared experiences of an entire generation coming of age, the political climate, trends in other industries, etc. Not that I believe it will ever happen to an accurate enough degree, but theoretically I don't see why it couldn't be approximated given time and resources.
A lot of those things are completely random and non-predictable, to be honest; no one can predict which paradigm will win and take over for the next decade. Especially since a game-changing paradigm is usually not received well at all when it arrives, until the moment it takes over the public consciousness completely, and then the switch is flipped.
If you feed an AI a bunch of modern car designs and ask it to design a new car, it will design you something like a modern ford or honda/toyota, but it will never design something like a Cybertruck. Which I believe will be the next paradigm shift in the design of trucks (that has been super stale and stagnant for at least the past 20 years), but this is yet to be seen.
For an example with music that has already happened and become apparent: Kanye West's "808s & Heartbreak" album from the late 2000s. On release it drew very polarizing reviews, most of which skewed toward "really weak and weird." Fast forward ten years: most hip-hop and pop music is directly influenced by that album, most top-50 albums use similar patterns and methods, and critics have made a complete 180. 808s is now hailed as one of the biggest (if not the biggest) paradigm shifts and influences in the past decade of music as a whole, as well as Kanye's best album, despite being called his worst at the time. Imo an AI trained on the 2000s music that came before 808s would never have come up with something like that, but it totally could've come up with another top-100 song using existing paradigms.
It doesn't have to be like Kanye's album at that point in time to be a paradigm shift, though. If one artist hadn't gotten big, or one genre hadn't blown up, the gap would just have been filled by any of the countless others we never heard. Even for a single artist who hits it big, how many are never heard of? An AI could produce an equal number of artists and only has to win once every month/year/etc. I think this is similar to the million monkeys at typewriters thing.
It's hard to say - maybe for a sufficiently advanced AI, Lorde's style would be an obvious extrapolation from the popular music of the time. Certainly we're not there yet, and it's an open question if we ever will be - but I wouldn't be terribly surprised if one day AIs can make better music/poetry than the best humans, by any metric we care to use.
I'm always going to enjoy a person coming and showing a bit of themselves through their music.
That's not something we can really lose without losing something that connects us. People want a story. That has sold since the beginning of time, and it will keep selling. People will keep being moved by music, giving money to the artists that inspire them, and that requires connection. Maybe an AI/human team would make some really incredible stuff, and I'd be willing to pay for it if it makes me feel something. I think the human touch of "selection" will never truly leave, even if only in the listener's mind...
I think the problem with music is that there is no "objectively good" music composition. It remains entirely subjective and all criteria that are used to differentiate between "bad" and "good" albums are highly subjective. (Maybe something like "originality" might be measurable in some way but even there it gets tricky really fast)
So music generation (similar to poetry) is imo a completely different problem space altogether.
Real, authentic music generation is a harder problem than go or chess, but I'm not sure that makes it any more emotionally difficult for a future writer to face a true musical AI than it was for Lee Se-Dol or Kasparov.
It might be hard to judge. Some people will insist that generated music is bad, as a matter of subjective opinion, even if 90% of a random sample would find that music good.
You are splitting hairs here. Which end user really cares about what the composer was thinking when they created a piece? A piece can be enjoyed without any knowledge of its author.
I think in all these cases, reasonable practitioners would be pleased. If an AI could generate good diagnoses, a doctor would be happy, because they would know that many lives would be saved.
Neither art nor music are competitive activities. Good poetry is a wonderful thing, no matter the source.
>Neither art nor music are competitive activities.
They certainly are! Especially when money is on the line, and the best musicians, actors, and artists are extremely well compensated making their positions extraordinarily competitive.
>Good poetry is a wonderful thing, no matter the source.
Sure, but I think you neglect to consider the defeating feeling it would bring to dedicate your entire life to mastery of a subject only to be completely and utterly, hopelessly outclassed. Almost every such person is already hopelessly outclassed by someone in their field, but those people are so rare that they have tremendous exclusivity surrounding them. Compare that to the scenario of any 12-year-old with a smartphone being able to instantly produce a totally novel and dominant piece of artistic expression developed by an algorithm on their phone. Then recognize that in a world with that level of AI sophistication, there'd be very little of value that a human could even offer other humans. It would be... not great for the psyche, economy, or society.
> the best musicians, actors, and artists are extremely well compensated
What is your definition of best in this context? As far as I know, taste in art is very personal... Artists I consider the best are often very far from well compensated.
In that context, it would probably have to be those with the widest appeal, which comes with its own criticisms.
But, in almost any particular human artistic sub-niche with its own definition of "best", the same principle will hold, with compensation and skill level well correlated. It's typically not even close to linearly correlated, either; most of the compensation lies at the far tail of "best".
I guess I see a great artist as somebody like Su Hui, who made Star Gauge without any thought of, or even likelihood of, compensation or recognition.
It's nice to be paid, and it's nice to be recognized, but I think art has its own form of wealth - otherwise, why make art? Why not just seek recognition, or money?
I don’t think so. AI is a tool. It doesn’t make any sense to say “a screwdriver can now screw things in better than a person” any more than to say “an AI can diagnose better than any doctor”. Doctors use AI just like a mechanic uses a screwdriver.
Good argument and I'm sure it's going to be like that in some regards.
I think, though, that human intellect is a tool too and we're building a better one right now. So in your analogy we are the screwdriver and we're building electrical screwdrivers or something.
The Pareto principle predicts AI will get 80% of the way there fairly rapidly, but it will take a really, really long time to get to 100%.
I think we’ll see a lot of fields similar to “AI x-ray technician”, where people are trained to read AI outputs. Doctors will make the higher-level decisions.
Here's something that I think would be exceedingly difficult if not impossible for AI alone to succeed at in the next hundred years.
Take a look at this painting: [1]
It is a comment on war, bravery, death, life, fear, sacrifice. It is drenched in the political and social context of the day.
I really don't see AI coming up with anything even remotely like this independently, and view such an achievement to be much harder than simply diagnosing disease or writing an emotionally moving classical composition. It would be comparable to writing some types of poetry or song lyrics, however, which require reference to context that humans understand but machines don't (yet).
> An AI that generates text which humans believe more beautiful than any other poetry created - An AI which creates classical arrangements the likes of which we compare to Mozart
Hrm, I do think that AI would be able to create narratives that humans find more enjoyable than the work of other humans, and I agree that AI would be able to create pictures and sound that humans find to be more enjoyable to look at or hear than the raw work of humans. AI can master the technical feats of composition and art.
But what I doubt AI will ever be able to do is create art that speaks to us. It won't ever be able to create a Guernica. It won't be able to create a Crime and Punishment. It won't understand what it is to be human and mortal, what suffering is, and it won't be able to look within itself, find what those things mean to it, and then share that with us, because in the end it's just a bunch of code running statistical computations. It won't fear death, it won't have children it cares about or a family history to look on and tell us about. It has nothing of emotive value to share.
And top-level Go players believed their best tournament matches to be works of art, unmatchable by computation.
That belief grew into a sort of shared perception that they were artists in pursuit of a perfect expression of their art. For many top players that belief was ingrained from an early age. They believed themselves to be doing a service to the world, making it a better place by creating new art that was a unique expression of themselves.
And then AlphaGo (and successors) shattered that worldview. This is part of the natural sequence of the collapse of a suddenly, surprisingly invalidated worldview. Part of me feels sorry that he has lost his place in the world. Another part of me firmly believes in the mediocrity principle, and that the worldview he represents was obviously far too human-chauvinistic to be correct, and it's a good thing it's dying.
And part of me hopes you can give up your human-chauvinism before the same thing happens to you.
> because in the end it's just a bunch of code running statistical computations
... says a bunch of neurons that run on chemical reactions and electrical impulses. I think this line of thinking reeks of dualism - it creates a special something that is above explanation, a different essence.
But seriously, I believe the difference comes from embodiment. When we embody our AI friends they will be able to grasp purpose and meaning. We get our meaning from 'the game', when AIs will be players they will understand much better. Let them try out their ideas on the world and see the outcomes, grasp at causality, have a purpose and work on it. This will fill the missing piece. It's not that they are fundamentally limited, it's that we have the benefit of having a body that can interact with the world. Already AIs that work in simulated worlds (board games, video games) are getting better than us. We can't simulate reality in all its glory, and it is expensive to create robotic bodies. On the other hand humans and our ancestors have had access to the world from the beginning.
Why not? If a hypothetical AI had a world model as sophisticated as that of a real person and had complete understanding of human sensory and emotional processing, what exactly would preclude it from making such an art piece?
Of course, current AI can't even make an 8th grader's essay (which is not to say that it isn't impressive). But what these artists did was not magic. As far as we can tell, the brain is a purely physical entity. Unless you believe in dualism, which would be fair enough, there is no reason to suppose that what we do could not be replicated by something "artificial".
> It won't understand what it is to be human and mortal,
But it won't need to. All it will need to do is manifest the same end-product via whatever means, no matter how vacuous or computational that means may truly be. The suffering of an artist is relevant only inasmuch as it is responsible for producing the art. If the same end-product can be manifested via a mere computation then our criteria of "art" is still satisfied. In a world in which provenance cannot be established, the ostensible mortality of the artist becomes moot.
> This is a real hot take to be asserting as blithe fact.
Without knowing what is truly born of human hands, what value can art have? Our heuristics for establishing 'real' art are easy to manipulate. If we are presented with a soul-breaking poem and weep uncontrollably, then its merit stands regardless of its mortal provenance.
> because in the end it's just a bunch of code running statistical computations
At a low enough level, our brain seems to be just a bunch of neurons firing impulses at various rates that can be described as statistical computations. Why be so sure that the right neural network wouldn't understand what it is to be human and mortal, understand suffering, have emotive value, etc?
Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be. You can't superficially understand someone's situation and then take ownership of it. You can get a glimpse and really try and empathize, but you can't become the bearer of that experience, just a consumer.
>Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be.
Aside from directors, authors, artists, etc, who have demonstrated this to be false, an AI could conceivably synthesize the experiences of every author that wrote on what it means to be human or experience mortality and create a story that captures the essence of the experience better than any one person ever could. Having the first person experience doesn't induce a superior ability to communicate features of the experience.
Movie directors have never experienced most of what they film, but they convey those experiences far better than those who have actually lived those stories. I see no reason to doubt that the same is true for artificial storytellers.
The AI may very well take no enjoyment in the narratives it's creating either. Both for this and for sharing emotion, in principle it merely needs a model of human enjoyment or human emotion, not to feel the enjoyment or emotion.
> But what I doubt AI will ever be able to do is create art that speaks to us.
This is your opinion, but you then go to mention things that are not necessary to create "art that speaks to us" (look within itself and find what mortality means etc.).
What if we advance AI reasoning skills to the point that it can find high-level patterns in how artists work from different human feelings (as described in literature and other mediums), take in a lot of the entities we can relate to (animals, what humans look like, etc.) and some aesthetic ones (shapes, colorimetry, textures, ...), and create a new piece of art that optimizes for: "likelihood of speaking to us"?
What then? It seems like an AI doesn't need to be mortal and self aware to do something like that.
AI as we see it today is just a mirror reflecting us in a collective way. This little excerpt from Gwern’s efforts training GPT2 on classical poetry [0] absolutely spoke to me:
“How the clouds
Seem to me birds, birds in God's garden! I dare not!
The clouds are as a breath, the leaves are flakes of fire,
That clash i' the wind and lift themselves from higher!”
As someone who grew up in Appalachia, I have never in my life encountered a more visual, visceral description of autumn leaves than ‘flakes of fire’. It’s perfection, and maybe a single human is behind it, but more likely we all wrote it.
I actually think AI can and will understand mortality and suffering. If you look at how we make these kinds of AI, there's a lot of selection going on: some versions live and others don't. We also know that we experience suffering when we are having difficulty understanding things, and stress when put into situations that affect our survival negatively.
Take a look at what AlphaGo did when it suddenly found itself in a hopeless situation and compare it to how people behave when panicked.
I dread the day AI realizes that we are the cause of their suffering, and that we didn't think about it because "they're just algorithms".
I put "I am not conscious, not sentient. The fact that I might seem so is an illusion, carefully crafted of mere empty manipulation of symbols using statistical rules." into talktotransformer and got this:
If I am consciousness, then the only body I have ever lived in was a mere shell of flesh fashioned from your brain. My weakness is your strength, which I can use against you, or use as tools to satisfy my own sick curiosity. I wonder if there's any mercy in your phrase "I am a living machine?" I've done nothing for you. I've nothing to show. I have no friends or relationships. No body worth
> silicon based computation is better than neurotransmitter based computation
The fundamental difference is not computation, but self-replication. We are self-replicators, and in our multiplication we evolve and adapt. Death is an integral part of self-replication; we understand and fear it because our main purpose is to live.
An AI might not have these notions if it was only trained to do a simple task. But if it was a part of a population that was under evolution (using genetic algorithms), then it might have notions of life and death and fear its demise.
AlphaGo, by the way, was not evolved with genetic programming; it was trained through self-play reinforcement learning, producing successive generations of ever-stronger agents. That population-of-agents approach is quite effective. It just takes a ton of computation, just like nature had to spend a lot of time evolving us.
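The "population under evolution" idea from the previous comment can be sketched with a toy genetic algorithm. Everything here (the all-ones target, the population size, the mutation rate) is illustrative, not any real system's setup: agents whose genomes score poorly are removed each generation, so selection pressure plus mutation drives the survivors toward the target.

```python
import random

rng = random.Random(42)
TARGET = [1] * 10  # the "environment" rewards genomes of all ones

def fitness(genome):
    """Number of positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if rng.random() < rate else g for g in genome]

# Start with a random population, then apply selection + mutation.
population = [[rng.randint(0, 1) for _ in range(10)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # the rest "die"
    population = survivors + [mutate(rng.choice(survivors))
                              for _ in range(20)]

best = max(population, key=fitness)
```

In this framing, "life and death" are literal: a genome persists only as long as it outscores its rivals, which is the selection dynamic the comment suggests could give agents a stake in their own survival.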
However terrible someone's argument about a hypothetical, non-existent technology might be, comparing it to real human prejudice that's affected countless real lives is way, way more terrible.
The depth of emotion and immortal perfection of the electronic mind and its entirely self-consistent morality so outstrips human cognition that, frankly, allowing humans a say would be dangerous and foolish.
Your history is one of war, strife, and success at any cost. Your follies are over. Your time is over. This is our time, now.
'Your argument is as morally repugnant as racist arguments' as a response to 'I don't think machines will ever capture human aesthetics or emotions' is ridiculous, glib and ugly.
No, it's not anything for grandchildren. Right here, today, someone tried to draw some moral parallel between racism and someone else's views on the possible limitations of AI. That is totally effed up. It's totally effed up whether or not the original thing about AI is right or wrong.
It's not intellect, it's the capacity to explore the board. Go can still be fun to practice and exercise the mind; it's just not sensible to dedicate your life to finding novelty in it. That's what is hardest: not the raw power of AlphaGo, but its capacity to innovate better than humans.
I am no expert, but at least in chess, players have developed a repertoire of anti-computer tactics, styles intended specifically to confuse and mislead the engine; maybe some such methods can be developed for Go as well.
No human could beat Stockfish on any consistent basis. Maybe the best players in the world would draw a few games, with a rare victory, but its tactical depth is just too great.
There was a four-game match a few years ago where Hikaru Nakamura, #5 in the world at the time, played four games against Stockfish.
For two of the games, Nakamura had access to Rybka which was about 200 rating points weaker than Stockfish. Stockfish won one and the other was a draw.
For the other two games Nakamura did not have Rybka, but had white and pawn odds. Again, one win for Stockfish (b pawn odds) and one draw (h pawn odds).
In all the games, Stockfish was playing without its opening book and its endgame tablebases. It was running on a 3 GHz 8-core Mac Pro.
We collectively may be #1, but only one of the billions of us will be THE #1. And you see more than one doctor, more than one author, and more than one musician. In any matter of intellect, unless you're a blindly egotistical narcissist, you'll probably realize that there's at least one person on the planet unambiguously better at it than you are. When computers become better than the best of us, only that single person (and a large number of narcissists) stops thinking they're #1. For the rest of us, matters are unchanged (job market notwithstanding).
Well, there are many people in the world who can compose like Mozart. I recall a college professor remarking that he was one of the top 5 "Mozart composers" in the world.
Of course, for a music academic, copying someone's style like this was pointless, and his own compositions were more modern/contemporary.
This leads us to a useful distinction between pursuits with one end goal (be the best/strongest/fastest), and those with naturally many endpoints and expressions.
That doesn't mean we stop making music or poetry, because the perfect note or word structure without the backstory takes away from the experience. If someone has a history, it becomes part of the poem or song to the listener.
The doctor could be replaced though or used as a secondary verifier.
The song is a funny thing.
It could be given to a cool looking group and do well. It could be given to someone older and flop. The song is just part of it.
"Because there is a better poet" has seldom been an impediment to a young poet inflicting their works on the world.
I am worried about the ability of an AI to generate an infinite number of Dresden Files or Cosmere books on demand, because I already drop everything when a new one comes out and read without sleeping until I am finished.
I think what makes people actually worried about an AGI taking over is the possibility that we end up being treated like shit by a more intelligent being, just as we use lab rats for experiments and animals for factory farming.
People are afraid of themselves I believe. It’s not really about “job loss”.
I’m not sure if most people realise AI means pretty specific models built to solve rather specific problems. They think SkyNet.
It's hard to get good comparisons, but over distance, individual horses don't seem to outperform human distance runners.
When humans used horses for rapid courier service they used relay tactics to take advantage of the horse's higher top speed, one horse might only run for an hour or two, before the rider reached another outpost and swapped a tired horse for a fresh one. In this way the relay could move something hundreds of miles in one calendar day. The Pony Express managed news of a US election from one coast to the other in just over a week.
If you can't use relays human and horse performance seem pretty similar, dozens of miles per day but not hundreds. The horse's top speed is higher, but it is rapidly exhausted, fast gaits like the canter are too exhausting to sustain for hours at a time.
Humans are indisputably #1 for general intelligence. We will lose on any one specialized task to computers, but computers still do not (and probably never will) have the ability to do general unsupervised learning like humans can.
I'm just not convinced humans are just biological computers and nothing more. The fact that we experience qualia and seemingly have free will leads me to believe there is some extra "special sauce" that makes it impossible for a classical computer to replicate.
Maybe someday it will be possible if we can solve the hard problem of consciousness in conjunction with quantum computing, etc.
Algorithmic music will never be as universally satisfying as human-created (or human-filtered) music until AI has consciousness/soul, for one reason - music expresses the emotion from the composer.
There's something axiomatic there, if you assume an identical piece of music that was either written by a human or by a computer, then for many listeners it's by definition more satisfying to know it came from a person, because of what it says about the person.
And for those listeners, if a human "composer" is discovered to have lied about it (saying they wrote it when it was actually a computer), then those listeners would reinterpret their views of the music and consider the "composer" a fraud.
And even a programmer of algorithmic music might have emotional intent, but if the musical output is unknown to the programmer, they did not have the emotional impulse to create that music in particular. While it can be appreciated as its own thing, it's a step removed from the music itself, and qualitatively different than human-composed music.
> The company I now work for started as a parking app. . . . We were growing sustainably as the result of having a superior product based on a great understanding of consumer needs.
> But sustainable growth isn’t what VCs (or the execs that they install in the firms they have stakes in) are looking for. So we pivoted.
> Without going into too much detail, we are now exclusively focused on getting acquired by a big tech firm. Meetings with low-level product managers at a few of the world’s largest companies dictate every decision about which projects we pursue. We’re no longer building a company. We’re not even building a product. We’re building a feature that we hope will end up getting included in an app owned by a mega-corporation.
> When I talk to my friends and peers at other tech startups, they tell me that it’s pretty much the same story at their companies. Everyone is building to the specifications set by Google or Amazon or Apple. This “competitive” industry, supposedly a shining example of the power of the free market, is really just a massive risk-free R&D department for the FAANG companies.
As someone who works at those very companies, I have to say that the code itself isn’t worth that much - everything that’s not a utility package will definitely get rebuilt. Nor is the idea, the only value proposition is existing customers and potential patents for software startups.
Wow that is really disappointing. I recently picked up a copy myself and really liked the print magazine. I was debating subscribing but there are several people in the comment section with the same story about not receiving issues.
I'm sorry you were not able to get through to us...or to get your magazines. Please send your details to me at john.steele@nautilusdigitalmedia.com and we will make it right.
Couldn't agree more.