Setting The Stage
Today, we start with a short translation: “AI” is the abbreviation for artificial intelligence. Now that we all know that, it’s worth asking: what does that term even mean? I ask because the term confuses me, mostly for philosophical reasons.
Although I understand that there are different kinds of intelligence, I’m not sure that any of us are qualified to decide what separates one kind from another. For example: if my child - a toddler - says something wise or powerful, I don’t refer to his statement as “artificial”. I smile, I marvel at his abilities, and I’m amazed. Hugs usually follow.
But I can also feel the same amazement and wonder when a technology impresses me after doing something wise or powerful. Wait: you’re telling me that I can talk to my smartphone?!? And that it will listen and respond with the information that I seek?!? That’s amazing! Not enough to earn hugs… but still: amazing.
Therefore, calling certain kinds of intelligence “artificial” seems wrong for a few reasons:
It’s derogatory. If a computer provides me with information that I want or need, then calling it anything but useful seems insulting. “Well, you know, it’s not REAL intelligence, so I can’t really praise it…” Yeah, I call bullshit on that.
It’s misleading. For me, intelligence is intelligence. Humans are a part of nature. Therefore our intelligence is also part of nature. If humans invent a technology that provides intelligence — like, say, a computer — then that, too, is natural. Calling something “artificial” suggests that it belongs in a different category.
The titles and definitions here are, honestly, secondary. They take us away from the deeper questions that society should be constantly asking when it comes to how and when we implement technology:
What are the tests that determine that a technology is safe?
Who designs those tests? And how often are they reviewed and updated?
Who decides how technology should be used, once it’s public?
What are the ethical guidelines that are implemented as safeguards to prevent the misuse of technology?
Who is tasked with creating, reviewing, and refining those guidelines?
What’s now clear is that certain types of technology have eclipsed human abilities. Additionally, we now live at a time when it’s become increasingly difficult to keep track of every new technology that’s made public. As a result of lax laws and ethics, matters of security and privacy are rarely discussed with the public prior to the release of new kinds of AI.
Those kinds of conversations, sadly, only happen later. Today is one of those times. So grab a cup of whatever makes you focus: we’re about to discuss two of the newest - and most popular - AI tools, Lensa and ChatGPT.
Lensa AI: the AI Photo Editor
Lensa (also called Lensa AI) is a photo-editing application that’s become very popular on social media platforms. It allows users to create realistic “magic avatars” of themselves by uploading photos onto the company’s servers. The images are captivating, often fascinating, and - depending on the filter used - delightfully psychedelic.
Unfortunately, Lensa has run into some copyright and ethical issues.
The Awesome?
Magic Avatars
Lensa’s built-in “magic avatar” feature has become a favorite and continues to make the app popular and exciting. Using Lensa, users can create custom, digital avatars from their uploaded selfies. The process can take as long as 20 minutes. After uploading photos of themselves, users can choose to download their magic avatars using a variety of artistic backgrounds and styles such as “Sci-Fi”, “cyborg”, “superhero” and others.
As of January 2023, for $6, $8, and $12, users can create and download 52, 100, or 200 AI-generated magic avatars respectively. Users can also choose to pay $100 for an entire year of use, which is sometimes discounted to $36.
For those who are not seeking magic avatars, Lensa is also a photo and video editor. It combines some nifty filters for photos and videos, much as Instagram does. Users can upload and use three files in total for free to experiment with this part of the application. After that, Lensa charges either $36/year, $5/month, or $3/week for its service.
But make no mistake: the star of Lensa’s show is the magic avatars. They continue to attract new users who are fascinated by the photos and willing to pay a fee to grab some of their own. Lensa, in fact, generated over $8.2 million within five days of launching the magic avatar feature. That feature is also what propelled Lensa to all-time-high downloads in the various app stores.
In 2022, the platform made over $16 million in revenue, $8 million of it in December alone. Currently, Lensa has over 1.2 million users, and the app has passed 17 million downloads as of January 2023. So it’s popular. Very popular.
But behind that popularity lurks some ethical and legal problems.
The Not-So-Awesome
Copyright Infringement
A number of artists have claimed that Lensa is processing and manipulating stolen artwork to render “original” avatars for users. In some cases, that artwork even contains the signature of the original artist.
So, you know: not great.
Prisma Labs, the company that makes Lensa, denies the allegations that their app replicates or steals unique styles and then mimics artists' work. But the artists seem to have a legitimate claim: Lensa is built on technologies that rely on a database of billions of photographs and paintings to train itself. Sadly, it seems that those images were scraped from the Internet without asking for permission from the artists themselves.
Anyone can search that database here. It includes the artwork of millions of artists, none of whom have been paid for their work. It is, therefore, unclear how Prisma Labs can defend its position.
Image Privacy
Lensa AI saves uploaded images within the app for training purposes. This way, it can improve its generative system over time. Privacy advocates are concerned with the company’s data collection and storage practices. Some users believe that the app retains uploaded data even after generating images and may share it with third parties.
Prisma Labs claims this is not the case. The company states that all uploaded user photos are deleted after the magic avatars are created. It argues in the Twitter thread linked here that AI-generated images can never replace talented artists and that Lensa is simply an “assisting tool” for average users to render AI images.
But that didn’t end the debate: it started it.
“The courts have been consistent in finding that non-human expression is ineligible for copyright protection.”
The technology - groundbreaking though it may be - is sure to create a copyright nightmare. The US Copyright Office has stated that AI-rendered images do not fall under copyright protection. But what happens if a user uploads a piece of human art that IS covered by copyright onto Lensa?
Prisma Labs states in its terms and conditions that users “may not upload, edit, create, store or share any User Content that may infringe, misappropriate or violate any patent, trademark, trade secret, copyright or other intellectual or proprietary rights of any person.”
While that sounds lovely, there are no checkpoints that Lensa implements to actually prevent that from happening. Instead, it passes on the responsibility to the user.
Data Privacy
Like other apps, Lensa collects and, sometimes, shares the data of its users. The company is upfront about this in its privacy policies. Notably, however, the company doesn’t automatically opt its users out of data collection and processing. It, once again, passes that responsibility on to the user.
On December 12, 2022, the website CNET warned the public about Lensa, highlighting that the application’s terms and conditions webpage granted Lensa a “perpetual, irrevocable, nonexclusive, fully-paid, royalty-free, and transferable license.”
Well, that sure sounds controlling, doesn’t it?!?
As of December 15, 2022 - just three days after that article was published - that wording disappeared from the company’s new terms and conditions. It is now replaced with language stating that users grant Lensa “a time-limited, revocable, non-exclusive, royalty-free, worldwide, fully-paid, transferable, sub-licensable license to use, reproduce, modify, distribute, create derivative works of your User Content, without any additional compensation to you and always subject to your additional explicit consent for such use where required by applicable law and as stated in our Privacy Policy.” (emphasis is mine to point out the changes)
To its credit, Lensa left the previous - and, I’d say, damning - version of its terms and conditions here.
Coincidence?
ChatGPT: A Revolutionary New AI Chat Bot
ChatGPT is a sophisticated application that engages in human-like conversation. Launched by the company OpenAI in November 2022, it drew millions of users for its lifelike interactions. It not only provides answers and data remarkably fast, but it also seems to understand context, emotions, and human psychology.
It does this by using algorithms that mimic human conversation. Only, it’s not human: it’s an advanced bot (short for robot) that’s designed to communicate as humans would. This allows users to communicate with ChatGPT just as they’d talk to another human being.
The “GPT” in the technology’s name stands for Generative Pre-trained Transformer. That’s a mouthful. It’s a fancy term that means the software is able to generate such human-like responses because it was trained - prior to its launch - on massive reams of human-generated data from the Internet. Training for ChatGPT included two important components:
“Feeding” ChatGPT many conversations between humans
Feedback from humans on how well (or not) ChatGPT reacted in previous conversations
These two types of training are known as “supervised learning” and “reinforcement learning from human feedback”. Notably, ChatGPT is programmed to continue learning from its users, who now number in the millions. This likely means that ChatGPT will keep improving rapidly over time.
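To make those two training signals concrete, here’s a deliberately tiny Python sketch - not OpenAI’s actual method, and with all prompts, replies, and scores invented for illustration. Phase one “feeds” the system human-written conversations; phase two uses human ratings to decide which candidate reply to keep:

```python
# Toy illustration of the two training signals described above.
# This is NOT how ChatGPT is really built; real systems train neural
# networks, not lookup tables. All data here is invented.

# Phase 1: "supervised" -- start from human-written example replies.
supervised_examples = {
    "hello": "Hi there! How can I help?",
    "what's the weather like?": "I don't have live data, but happy to chat!",
}

# Phase 2: "reinforcement from human feedback" -- humans score several
# candidate replies, and the system keeps the highest-scoring one.
def reinforce(prompt, candidates, human_scores, memory):
    best_reply, _ = max(zip(candidates, human_scores), key=lambda pair: pair[1])
    memory[prompt] = best_reply  # remember the preferred reply
    return best_reply

memory = dict(supervised_examples)
reply = reinforce(
    "tell me a joke",
    ["Error 404: humor not found.", "Why did the AI cross the road?"],
    [0.9, 0.4],  # hypothetical human ratings
    memory,
)
print(reply)  # the higher-rated candidate wins
```

The real systems adjust billions of neural-network weights rather than storing replies, but the division of labor is the same: humans supply examples first, then grade the model’s attempts.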
What’s Awesome?
Most interactive chatbots have little or no ability to recognize context, emotions, or subtext. I, therefore, decided to engage with some of them on a topic that was on my mind earlier in the day: the weather. Here are screenshots from my conversations with two chatbots that rank high on Google: Cleverbot and Chat with AI:
Turns out “Cleverbot” (at left) isn’t very… clever. A basic understanding of the topic at hand — sadness from the rain, in this example — is missing. In the case of Chat with AI (at right), the results were even more laughable: my sexual orientation was questioned randomly and, in my case, incorrectly.
By comparison, ChatGPT generates natural, human-like responses to a wide variety of questions and topics. More remarkably, ChatGPT retains the context and essence of a conversation: it can use earlier parts of an exchange to render coherent, suitable responses. And it sure does seem to understand emotional and psychological content:
This kind of immediate understanding, the ability to offer concrete suggestions for a problem, and a sense of humor are not only remarkable but captivating. They also open the very real possibility that the technology can be used not only to help people but also to help them cheat.
It’s important to note: not only can users have interactive conversations with ChatGPT, but it can also perform different language tasks like fetching, summarizing, composing, or even translating various kinds of data.
Perhaps that’s why ChatGPT is getting so much press.
What’s Not-So-Awesome?
One of the limitations of ChatGPT is that it often renders believable but false answers. However, there are other, deeper issues with the software that should be understood and discussed.
Limited Knowledge of Current Events
Want updates on your stock portfolio? What about the latest news from the front lines in Ukraine? Or emerging news about the next five politicians who discovered classified documents in their homes? Sorry, but that data isn’t available in ChatGPT. That’s because its knowledge base is restricted to events that happened prior to 2022. It’s possible, of course, that as the technology begins to earn a profit for its parent company - OpenAI - that more current information will be “fed” to the platform. For now, that’s not the case.
Human Bias
It stands to reason: if ChatGPT involves supervised training from humans — and if humans are biased — then ChatGPT will be as well. Some experts now accuse the platform of having a Left-leaning bias, with some Conservatives calling the platform “woke”. Others suggest that issues of racism still need to be addressed on the platform — and in AI overall — in far deeper ways.
Ethical and Legal Issues
Plagiarism is a big deal in the world of art and education. But as the digital world eclipses the analog one, more and more tools are emerging to do the work of humans. If a bot like ChatGPT can communicate as effectively as a human, wouldn’t that be attractive to students who procrastinated until, say, the last minute to write their essay on Aristotle?
ChatGPT wrote me this 800-word essay on the major contributions of Aristotle in about… 45 seconds.
Did I mention that ChatGPT passed a graduate exam at The Wharton School of Business at my alma mater, The University of Pennsylvania? Or that it passed a law school exam at the University of Minnesota Law School?
Pardon the pun, but the bar is now set. 😂
Perhaps this is why the NYC Public School system banned ChatGPT from its networks. Or why educators around the globe are now having conversations about how to deal with what they assume will be an onslaught of virtual plagiarism by AI.
Final Thoughts
Artificial intelligence and the apps built on it will continue to be in the spotlight for the foreseeable future. It’s not an exaggeration to say that they are changing the way people interact with technology. And with each other.
Lensa and ChatGPT offer incredible advances in two popular fields. But there are security, privacy, and ethical concerns. Consumers should understand that AI is still an emerging and improving field. It will never be perfect.
Additionally, it’s worth reminding everyone that corporations are just like people: some have excellent ethics and morals; others do not. And because all corporations are beholden to their shareholders before the public, patience is strongly encouraged.
So is caution.
Users can and should learn about a company's track record before uploading their personal data onto any company's servers. Ditto for reading all of those boring terms and conditions web pages. While it might take time, we should learn and understand all of the critical information that a company is required to disclose to the public.
Only then, once we’re prepared and educated, can we have some fun and play with the newest and coolest tools that are available. I don’t know about you, but I totally want to see myself as an AI Cowboy Space Alien in Lensa. Eventually. For now, I’ve got more reading to do.
And that’s a wrap for today’s episode, everyone. Thanks for being a part of our community and, as always… surf safe! 👍🏼 👌🏾
Popular Past Issues:
Which secure routers to purchase and WHY.
My recommendations on the best VPN providers.
My favorite, free tool to keep email addresses private.
A crash course on keeping your devices updated.
Our Current Recommendations
David’s easy-to-read book on home tech: “Screw The Cable Company!”
The online backup software I use: iDrive (affiliate link)
The VPN software that I use: Nord VPN (affiliate link)
The email anonymizer that I use: 33Mail (affiliate link)
The secure router I use at my office: Gryphon (affiliate link)
The secure router I use at my home: Synology RT6600ax (affiliate link)
Please visit the TechTalk Product Recommendation page for more up-to-date picks for best-in-class software, hardware, and services. These are the very same products and services that we own and use ourselves.
Transparency Statement
Please read the TechTalk Transparency Statement to learn about our newsletter’s strict policies on linking to products and services that we recommend to our readers.