AI brainwave

It wasn’t that long ago that Faster Networks wrote a whole piece about OpenAI’s test release of its AI image generators, DALL-E and DALL-E 2. The text-to-image software lets a user write a description of an image they want to create and have that text converted into an artwork. Some of the AI-generated translations were so effective they won high-profile art competitions, creating a buzz but also some controversy. Could AI art be measured against the commitment, talent and imagination of an artist with a brush?

OpenAI was never going to rest on its laurels; this is what it does: push the technological envelope. It researches new forms of technology and, once they reach a certain standard, releases them for OpenAI account holders to test, market and improve. The release of ChatGPT actually comes off the back of previous versions that have recently been upgraded. According to OpenAI and their 13-minute read about ChatGPT, the language model has been trained by “human AI trainers [that] provided conversations in which they played both sides—the user and an AI assistant”.

After creating an account on the OpenAI website, I asked ChatGPT to “write me a blog that outlined the benefits and concerns of AI language and conversations replacing human interaction” and received an error message with an offer to retry, which returned the same error message. Boring! Now the writing has to come from my own research, imagination, content strategy, editing and publishing – actual time spent thinking and fine-tuning a coveted skill – to convey a message articulately, with a creative human edge that can create a connection or stimulate another idea or conversation. Not surprisingly, Nick Cave had some things to say about a ChatGPT-written song in the vein of Nick Cave, sent to him by a fan: “[this] is a grotesque mockery of what it means to be human”. He acknowledged that he was taking it personally because he is very much in the middle of the songwriting process, and it requires ALL of his humanity, “blood and guts” and vulnerability to write music that reflects the human experience. Not all music is that meaningful, and not all music fans care that much about the lyrics of a song or the artist who wrote them.

One example provided by OpenAI themselves is a software developer asking ChatGPT to find an error in code that they can’t figure out themselves. That’s a fantastic tool that saves time, and inevitably money, for an individual or business. Although, what’s the point of a software developer who can’t solve the problems of code gone wrong? And if the developer keeps asking AI to solve problems, what happens when a) AI can’t find the answer, or b) the developer’s skills disappear through lack of use?
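For the curious, that debugging workflow looks roughly like the sketch below – a minimal Python example using OpenAI’s chat completions client, where the buggy function, the prompt wording and the model name are all placeholders I’ve made up for illustration, not anything prescribed by OpenAI.

```python
# Hypothetical sketch: asking the ChatGPT API to spot a bug in a snippet.
# The buggy function and prompt are invented for this example.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

buggy_code = """
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / (len(numbers) - 1)  # bug: divides by len-1 instead of len
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find the bug in this function and explain the fix:\n{buggy_code}"},
    ],
)

# Print the model's explanation of the bug and its suggested fix.
print(response.choices[0].message.content)
```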

The novelty factor of ChatGPT is fading quickly, but inevitably we will move on to the next form of AI before we have had a real chance to understand its shortcomings or real benefits. Rusty Foster, author of the Today in Tabs newsletter, referred to Ted Chiang’s article in The New Yorker that described ChatGPT as a “blurry JPEG” of the web. The data collected is all there but not necessarily put back in the right order, so it kind of makes sense but isn’t as sharp, because it is an “approximation”. ChatGPT has the entire internet at its fingertips to mine for new content, but does the public want or need a shallow and confusing interpretation of the internet? No, thank you.
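If you want to feel the analogy rather than just read it, here is a tiny sketch of ordinary lossy compression – Python with the Pillow library, and the file names are placeholders – where the compressed copy keeps the gist of the picture but throws away detail it can never get back, which is the point Chiang is making about ChatGPT and the web.

```python
# Lossy compression sketch for the "blurry JPEG" analogy.
# "photo.png" is a placeholder file; Pillow (pip install Pillow) is assumed.
from PIL import Image

original = Image.open("photo.png")

# Save a heavily compressed JPEG: the rough shape of the image survives,
# but fine detail is discarded and cannot be recovered afterwards.
original.convert("RGB").save("photo_blurry.jpg", format="JPEG", quality=5)
```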

NB: none of this was written by AI software; it was, in fact, thoughtfully written, produced and published by a properly paid freelance content creator with an actual brain and lived experience.