Insights


‘Made with AI’… but what if it’s not?

In February 2024 Meta announced that it would be rolling out a suite of AI detection tools. As Meta explained in its newsroom, as AI-powered content generation tools get more sophisticated, it’s becoming harder for consumers of content to tell the difference between what’s been generated by AI and what has not. By clearly and correctly labelling content which is created using AI, Meta hopes to build user trust.

Since AI tools became cheap, user friendly and easily accessible, there has been a well-documented rise in public scepticism and mistrust, with fake news and deepfakes making their way into public consciousness. Research published in August 2023 suggested that 30% of the global population was aware of the concept of deepfakes; the same research conducted a year earlier found only 13% knew what the word meant. As this interesting thought-piece from the European Parliament suggests, “Simply knowing that deepfakes exist can be enough to undermine our confidence in all media representations, and make us doubt the authenticity of everything we see and hear online.”

Undermined Confidence

It’s this ‘undermined confidence’ in platforms which Meta is trying to address with its AI detection labels. But getting the labelling right is harder than Meta imagined. It’s easy for Meta to detect AI-generated content created using its own software, but there are now thousands of different AI tools out there, used either to create images and videos from scratch or to manipulate existing visuals. In the last few weeks, content creators on Instagram and Facebook have noticed that many of their photographs and videos have been mislabelled by Meta as ‘Made with AI’ when they weren’t. Unsurprisingly, there’s been an uproar, with some influencers going so far as to boycott Instagram.

Here’s why Meta’s probably struggling to get the label right. To properly detect whether an image has been created by AI, detection software can’t rely on the “look” of an image. Instead, it has to read metadata or invisible markers embedded within the file. The hope is that software like Dall-E, Midjourney and others all embed some metadata into AI-generated content to mark it as such. However, there is no government legislation or industry standard – thus far – which makes embedding this metadata or these AI markers mandatory.

Dangerous Assumptions

Regulation is being discussed. The EU AI Act, for example, prescribes some stringent record-keeping and logging of materials produced by high-risk AI applications. But how these regulations will be adopted and enforced remains to be seen. For now, tech companies seem to be moving faster than regulators and doing their own thing.

Meta claims to be developing its AI detection standards alongside other industry players like Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. What Meta assumes, and needs, for its detection programme to be successful is that other industry players will comply. It also assumes tech companies will act in good faith and all agree on the ethics of disclosure – a big, and dangerous, assumption to make.

Meta’s mislabelling of content seems to have exposed the problem. As a Meta spokesperson explained “We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent.”

Blurring lines

Meta’s mislabelling also exposed another problem with AI – the blurring line between what is and isn’t AI. Many photographers noticed that Meta was ascribing the ‘Made with AI’ label to images edited using some of Adobe’s tools like Photoshop. Photoshop is used to remove a bit of garbage on the lawn in an otherwise flawless frame of a bride and groom, or to make a sunset look more dramatic than it was. But it can also be used to make waistlines look slimmer, lips fuller and skin smoother – which brings a range of other ethical issues and mental health concerns.

But increasingly, tools like Photoshop have AI integrations. Here’s an example: instead of manually scrubbing out the garbage, you can use a text prompt to “tell” Photoshop how you want the image edited, and it will interpret your verbal prompt to remove garbage for you. So, you might end up with the same output as if you had scrubbed out the garbage yourself, but your image now has a tiny bit of code which says to Meta’s AI detector that it has been ‘Made with AI’.

Tiny Artificial Intelligence integrations are making their way into tools we use day to day. This is not new. Microsoft Word, on which I am currently penning my thoughts, launched its ‘Editor’ function way back in 2016. This is an AI-powered service which performs spell checks and recommends grammar corrections. Even though this article is a product of my very human brain, I’ve relied on Word to correct a few typos for me. So, is it ‘Made with AI’? It’s all in the eyes of the beholder – or in the eyes of the AI detector.

Some notes:

  • Meta has acknowledged it is trying to resolve the mislabelling of images.
  • The ‘Made with AI’ label is currently only visible on the mobile app, not on desktop.

Hawthorn Hosts an AI Panel on the Opportunities and Risks for the Creative & News Industries

Artificial Intelligence (AI) has emerged as a prominent topic of discussion in various spheres, including board rooms, government departments, and regulatory offices.

Yesterday, Hawthorn organised a private breakfast panel, moderated by Emily Sheffield, that brought together leaders from the media and creative industries, government officials, and regulators. The objective of the event was to explore effective strategies for harnessing the advantages of AI while addressing potential risks.

We’re particularly grateful to our esteemed panellists who contributed their valuable insights: Stephan Pretorius, Global Chief Technology Officer for WPP plc; Sophie Jones, Chief Executive Officer at British Phonographic Industry (BPI); and Baroness Tina Stowell, Chair of the Lords Communications & Digital Committee.

Thanks to: Department for Culture, Media and Sport | Ofcom | Competition and Markets Authority | 10 Downing Street


Like me, but keep it professional: Twenty years of LinkedIn

Three very important things happened to the internet in the year 2003, exactly 20 years ago.

Skype was launched, introducing us to the wonderful world of internet-based video conferences. WordPress was launched, giving rise to the phenomenon of bloggers, the precursor to the influencer. And LinkedIn was launched, bringing the world of performative online social networking into a professional setting.

This holy tech-trinity would go on to change the way we work. To take stock of where we are today: the average office worker spends about 21.5 hours per week in online meetings, 810 million websites are built using WordPress, and LinkedIn has 930 million members.

Skype, WordPress, and LinkedIn were a part of the rising Web 2.0 tide, which on the surface put power into the hands of the people, giving Joe Bloggs the ability to ‘influence’ the world with his own content.

For a while Joe really enjoyed having that “power”; he could start a blog about his niche interest in carnivorous plants, connect with other appreciators of the Venus Flytrap, and attend talks by Flytrap experts based in South Carolina from the comfort of his own home in Hackney. Increasingly, though, Joe Bloggs would become aware that his well-crafted LinkedIn post announcing his new role at the Royal Horticultural Society was at the mercy of the algorithm, which was at the mercy of a big tech company that controlled who, when, how and how many times his network saw his post. Joe and the rest of us were rudely shaken out of our Web 2.0 dream of digital democracy.

Late last year, LinkedIn raised a few eyebrows when it became public that the company had been running a five-year-long experiment on 20 million of its users. The study was an A/B test on LinkedIn’s ‘People you may know’ feature, where half the subjects were recommended strong connections, i.e., people with whom they had a lot of mutual connections, while the others were recommended connections further outside their existing network. LinkedIn wanted to understand the impact of the ‘People you may know’ feature on users’ ability to find jobs on LinkedIn. As an aside, the study found that the likelihood of you finding work via LinkedIn is greater if you connect with people with whom you don’t have as many mutual connections.
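The mechanics of an A/B test like this are simple, which is partly why they are so ubiquitous. A minimal sketch, under the assumption of a deterministic 50/50 split (the experiment and arm names below are hypothetical; LinkedIn’s actual implementation is not public): hash each user’s ID so the same user always lands in the same arm, then compare outcomes between arms.

```python
# Minimal sketch of deterministic A/B assignment. Names are hypothetical --
# this is not LinkedIn's implementation, just the standard technique.
import hashlib

def ab_arm(user_id: str, experiment: str = "pymk-strong-vs-weak-ties") -> str:
    """Assign a user to one of two arms, stably, via a hash of their ID."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Even hash -> "strong-ties" recommendations, odd -> "weak-ties".
    return "strong-ties" if int(digest, 16) % 2 == 0 else "weak-ties"
```

Because assignment is a pure function of the user ID, no opt-in and no per-user storage are needed – which is exactly why users in such experiments rarely know they are in one.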

When the findings of the study were released, there was outrage at the ethical implications of “experimenting” on people’s ability to find work. Arguably, half the people in the A/B test were, by their inclusion in the experiment, less likely to find work than the others. But here’s the thing: LinkedIn, and a vast majority of the internet, is built on proprietary algorithms which are constantly being tested and tweaked based on our use of them. We just don’t think of this constant data-gathering and feedback loop as an “experiment”; but it’s something we sign up for in the small print.

Just because LinkedIn is a professional network designed to help users navigate job hunts and build a career does not make it any more or less virtuous than every other digital product that keeps us hooked to a screen, commodifies our content and monetises our data.

The outrage at LinkedIn’s experiment came from the realisation that an algorithm change could impact one’s livelihood. Meanwhile Meta and Twitter have been playing with outrage, addiction, and all manner of base human instinct. This hasn’t gone unnoticed and we’ve started having increasingly nuanced discussions about the potentially harmful impact of social media algorithms and the messiness that comes with unregulated tech development.

For a long time, LinkedIn felt like a clean, professional space where you could leave behind the messiness of your real life, stepping over the screaming kids and piles of laundry into an ironed suit and the polished world of work. But then we started noticing the rise of the LinkedIn influencer – savvy users who have figured out clever ways to write posts which inspire reactions and arguments in the comments.

The growth of LinkedIn and associated work-enabling technologies over the last twenty years has taken us slowly towards a world where it’s harder and harder to separate the screaming kids from the boardroom. Remember the kid who walked into her dad’s BBC interview during the pandemic? Skype made that universally joy-inducing moment possible.

A host of challenges and opportunities have come with this merging of worlds; we’re discovering more of each as we go along, all semi-aware yet unable to escape from the experiment. Question is, do you really want to? Or is the thrill of the ‘like’ all worth it?

But before you answer that question, go share my article on LinkedIn.

By Salonee Gadgil, Digital Associate Director


Yes, you’re #Instafamous, but are you “AI famous”?

With AI chat platforms and AI art generators such as HotPot at our fingertips, the vocabulary of Artificial Intelligence is fast making its way into our collective lexicon. It feels like an important moment in time, similar to when Snapchat AR filters first appeared on our smartphones, revolutionising the way we capture and share moments by integrating digital animations, 3D objects, and special effects into our real-world photos and videos. That happened seven years ago, in case you were wondering.

It took just a few months for digital filters to become mundane, spreading from Snapchat to everywhere else. But what we didn’t know at the time was that dog ears were the playful gateway drug of filters that led us down a sinister, addictive path. The tech evolved from cartoonish filters that were easy to identify to more insidious beauty filters designed to be disguised. What followed were face-tuning, contouring, and lip-filling AR filters, which are both a cause and an effect of image-first social media channels. It’s hard to decide which came first: the filler or the Kardashian.

Either augmentation influenced reality – i.e., repeated exposure to filters influenced people to do their make-up a certain way, leading to younger and younger people undergoing plastic surgery to change the way they look (there’s even a term for this phenomenon: ‘Snapchat dysmorphia’). Or reality influenced augmentation – i.e., influencers doing their make-up a certain way and getting cosmetic treatments normalised a way of looking and being, which was then mimicked by the AR filters.

This cycle where cause and effect sort of chase each other around in an ever-dangerous whirlpool continues to this day. We’re aware of the dangers of course, and some of us are trying to row against the current, putting rules and regulations in place to preserve the fun, and reduce the fillers.

With time, we’re likely to see the same happen with AI. AI is a complex thing to understand, but in a nutshell, AIs are machines or software which learn to mimic or “think” like human beings. They do this by combining large data sets and interpreting or organising that data. Most of the easily accessible AI tools get this data from the internet.

For now, AIs are operating with limited data sets; for instance, some tools have access to Google Images but not Instagram. But we’re moving towards a world where more of our digital realities are connected, which means the AIs of the future will learn about us by interpreting the things we post, share, publish, Tweet, Google, gram and more.

In the early days of the internet, search engines began organising information for us; people and brands became only as “valuable” as how high they ranked in Google search results. So began the publishing of websites, blogs, and Wikipedia entries. With Web 2.0 and social media, if you didn’t post it, it didn’t happen. So we began populating feeds with real-time updates about what we were eating, with whom and where. What we didn’t know at the time is that AIs would come along and have all this data to feed on.

For brands and businesses playing the attention-grabbing game, perhaps going forward influence won’t be measured in how many people follow you on social media, or how high up you feature in Google. Instead, we’ll be typing, speaking, or thinking (yes, it will happen) questions into AIs, so they can teach us about the world. Remember how our parents grumbled about us moving from physical encyclopedias and libraries to Google? Soon we’ll be grumbling about our children refusing to Google things, but instead just taking the lazy route and asking AIs for the answer.


It’s only a matter of time before brands come to comms agencies asking not to appear on TV or the FT but asking to reach the AIs. Because really, in the long run AIs might control social, cultural, and political discourse. In the not-so-distant future, you may not matter, unless the AIs know who you are.

I, for one, can’t wait.

By Salonee Gadgil, Digital Associate Director


Time to activate your influencers…

It’s a tough time to be a journalist in the traditional media. According to the Reuters Digital News Report, only 15% of the UK population had used print news in the last week, down 7% on 2020. Alongside this, the numbers watching TV in the UK have dropped by 20% in the last seven years.

The future for the media will be very different, and this demands a dramatic rethink of corporate communications tactics. Already, corporates and leaders are losing the influence and trust they once held – outside of their business – due to a digital strategy of silence and an out-of-date approach to the way they speak to their stakeholders.

Shifting sands
The change we are seeing is fundamental. It isn’t just about how people get their news, it’s about whether they want news at all. In the latest Reuters Institute for Journalism Report, Nic Newman describes an ongoing “decline in interest in the news overall” as well as the channels delivering them. Many people are more likely to turn to the feeds of campaigners than the words of journalists to learn about the world.

That change has inverted the relationship that stakeholders have with corporates, executives, and their employees. Previously, tradtional media organisations, multinational brands and high-powered CEOs could be considered trusted authorities. Now, due to the almost limitless amount of information presented online, the power has switched from the company to the individuals within it.

Digital channels should be in the hands of your storytellers
Adapting your communications tactics doesn’t mean abandoning everything that has gone before. Your digital strategy is best placed alongside your traditional communications strategy.

Delivering a consistent message across platforms increases your authority amongst your stakeholders. But that doesn’t mean saying exactly the same thing across every channel; you can share your message in different forms on different channels.

For a long time, the job of digital communications was given to sales and marketing teams, overlooking the natural storytelling capabilities of communications teams and agencies to run digital newsrooms and channels. However, some leaders embraced the potential of a new digital approach early. Richard Branson’s use of digital set the tone for the likes of BP’s Bernard Looney, who uses his LinkedIn to promote company initiatives, praise staff and provide news on speaking engagements. This approach allows him to make an authentic connection and helps humanise the BP brand.

This approach is not an accident. Both individuals not only write posts themselves but also work with their company communications teams to ensure consistent, company-relevant content is uploaded at timely intervals, keeping the channel active and interesting. This allows for consistent delivery of company key messages alongside personal updates that demonstrate their credentials as leaders.

Use your digital channels to create human connections
As consumers are more likely to “trust someone like us”, activating employees and leaders as influencers across your own (and their own) channels is critical to creating advocates.

Building influencers out of employees has delivered success for the likes of Walmart, which transformed 500 employees into influencers under its Spotlight initiative. This has seen the brand become a force on TikTok through its “Walmart Cheers” and “Walmart dance parties”. By giving a voice to its front-line associates, Walmart is humanising its brand and offering customers authentic, relatable content that they actually want to see and engage with.

Brands and business leaders that build a rapport with their audiences stand a better chance of creating advocates. Building these advocates out of audiences, through a two-way conversation, will also increase crisis resilience, when an issue arises.

As the Reuters report puts it: “Digital natives are less likely to visit a news website, or be committed to impartial news… [they are] … more likely to say they use social media as their main source of news. Deeply networked, they have embraced new mobile networks like Instagram and TikTok for entertainment and distraction, to express their political rage – but also to tell their own stories in their own way.”

Finally…
Few modern businesses ignore digital communications entirely. Equally, relatively few make the most of their true potential online. As traditional media suffers, and the way consumers learn about the world changes, communications strategies will have to change more radically than ever before.
