> There Is No Shortcut to Manifesting Your Vision
Indeed. But people don't want "their vision". People want to say stuff like "I want a movie about a money counterfeiter, based on a true story". There are countless "visions" corresponding to this prompt. AI would choose one, based on its giant inscrutable matrices, deliver the movie, and the prompter would be satisfied.
If you are an artist, AI is pointless. If you are an art consumer, AI is the best thing since sliced bread.
> deliver the movie, and the prompter would be satisfied.
But only if the movie is actually good. Think of how hallucinations will accumulate over longer content: the movies will be full of inconsistencies. How many AI movies will people watch after sitting through a few with missing endings, or with important characters who simply disappear from the plot?
And there will be no IMDb for such movies where you can check reviews first. You have to gamble with your valuable time.
Additionally, how many people now sit and generate images all day only for themselves? Although some might, I doubt it is significant. Most generate them to share them. But almost nobody is going to be interested in consuming random movies posted by random people in the same way they do images.
You'll have ratings for movie generators.
If a generator produces movies with "missing endings, or important characters just disappear from the plot", no one will use it, and the company will be forced to improve it (or even not release it in the first place).
Like coding agents are now. A year ago, no one used them because of slop. Now, there is still slop, but they have become usable enough to be actively used in the industry.
Note that I am not a big fan of AI art. I think that since we are humans, there is a lot of intricate and intangible value in savoring art produced by a human. But I do not think the way forward is to pretend AI will be forever incapable of tricking us successfully.
> If a generator produces movies with "missing endings, or important characters just disappear from the plot", no one will use it, and the company will be forced to improve it (or even not release it in the first place).
I agree, but that is happening now with incoherent images and short videos. They are all over social media. We are flooded with very low quality content. It still works for now because its only purpose is to hold your attention just long enough to get the boost from the social media algorithm.
But people will have a lower tolerance for bad movie-length content, so it might not go far, unless people find a way to game the algorithms the way music streaming services are being gamed right now. In that case those awful AI movies will have lots of viewers and great reviews, because they are botted.
> Now, there is still slop, but they have become usable enough to be actively used in the industry
Only because they work in a formal domain, where verification tools (compilers, type checkers, test suites) can serve as a source of truth for the LLM to iterate against until it meets the goal. Open-ended tasks that don't have right or wrong answers don't benefit from these advances.
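To make the contrast concrete, here is roughly the loop I mean, sketched in Python. The generate_patch() call is a hypothetical stand-in for whatever model the agent uses, and pytest stands in for the verifier; real agents differ in the details, but the shape is the same:

```python
import subprocess

def generate_patch(prompt: str) -> str:
    """Hypothetical stand-in for the model call the agent makes."""
    raise NotImplementedError

def agent_loop(task: str, max_iters: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_iters):
        # Ask the model for code, feeding back the failure output from the last attempt.
        code = generate_patch(task + "\n" + feedback)
        with open("candidate.py", "w") as f:
            f.write(code)
        # The test suite is the source of truth: pass/fail is unambiguous.
        result = subprocess.run(["pytest", "candidate_test.py"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code           # verified: the goal is met
        feedback = result.stdout  # otherwise iterate with the failure report
    return None
```

The whole thing hinges on that unambiguous pass/fail verdict. There is no equivalent verifier for "is this film coherent and worth watching".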
Furthermore, the output is still very low quality. If people actually read the code, they would find no enjoyment in it. It is just that we can now brute-force it to meet an end goal, and for some use cases what happens in between doesn't matter. For content like film, it mostly will.
> But I do not think the way forward is to pretend AI will be forever incapable of tricking us successfully.
We have already passed that mark. I'm constantly being sent things people think are real that aren't. Nonetheless, some tasks will remain impractical, not because they can't be done, but because they aren't efficient or there aren't the right incentives.
In summary, with enough data and compute, AI can mimic almost anything to high fidelity. However, none of it is currently profitable; it doesn't pay for itself. To reach the goals many have in mind, it still needs to be orders of magnitude more efficient, and uncertainty is growing over whether the financing for the future power requirements can be raised.