![](https://crypto4nerd.com/wp-content/uploads/2024/02/1-TqvSuS5io1uCZLzQluGlw-1024x576.png)
OpenAI just sneak-peeked “SORA,” their brand-new text-to-video AI model. I remember when the “Will Smith eating spaghetti” video came out; people were both perplexed and critical. There was excitement imagining how it might change the world, and there was also apathy, since it was “horrible” and still a “far cry,” or rather a facade, as most people pointed out.
And here we are, looking at videos up to a minute long that appear virtually flawless (of course, they are not completely flawless). This may not seem like a major leap to most, but it certainly is compared to what was considered the best just a few weeks ago. It looks like it has obliterated all of the competition: Pika Labs, Runway’s Gen-2, and Stable Diffusion Video have already been surpassed in terms of video coherence (assuming there has been no misleading or false portrayal on OpenAI’s side).
While everything seems so exciting and cool, I wonder what utility it will actually serve. Of course, if you are an artist, you can definitely make good use of it: you can quickly visualize ideas and automate repetitive tasks, which will in turn save you time and let you focus on more important things. But what about common people?
It sure is potent and can change the world positively, as most leaders of AI envision it doing. But what about the mediocrity of most common people?
I might sound too negative, rather like a crybaby, but this is what I have observed living in this world for more than two decades…
I anticipate little to no positive impact from widespread adoption by the general public. Instead, I fear it will succumb to the common pitfalls of human nature, with ~~many~~ most using it solely for lewd content, spreading misinformation, hate speech, and online harassment.
If you ask ChatGPT or Gemini “how to make a bomb,” you will be tracked and instantly reported to local authorities.
Well, I was kidding; that won’t happen. (The latter, of course. The former? I don’t know!) But you will definitely see a “wannabe conscious” entity telling you how making a bomb might not be a great idea. If you are using a community-trained uncensored model, however, then you are lucky, or maybe unlucky: it will not only teach you how to make a bomb but will gladly tell you “why bombing your neighbor might be a great idea.”
Disclaimer: I have no business with bombs unless it’s Deepawali, and even then I condemn bombing your neighbor or anyone else; by “bomb” here I mean a petty firecracker.
Censorship doesn’t work!
Censorship never works the way authorities want it to. Sure, it might make things harder for most, but at the end of the day people bypass it pretty quickly.
Perhaps I don’t understand censorship,

maybe censorship is not meant to stop access completely but only to make access harder,

or

perhaps they don’t care; all they care about is painting their picture white, so they are not the ones to be blamed.
Even if a genuine authority somehow finds a way to build a system that is safe and impossible to jailbreak, we will still see people fighting against it. They will have all sorts of weird-sounding but valid arguments against such a system. And I partly agree: why would you want to cripple a highly capable model by setting a bunch of safeguards around it?
But is this not how society works? You can’t let everyone do whatever they please; how else are you going to build a system that is safe for everyone? How else are you going to make sure someone doesn’t wake up and decide to rob you, while others defend them, saying, “This is freedom”?
But still, what about Creative Freedom?
This is a paradoxical problem that can only be ~~solved~~ managed by balancing “censorship & freedom” with the utmost care.