Does your text really resonate with people? This is how I use AI to study comprehensibility and resonance.

Can AI help assess the difficulty and suitability of a text? Yes, if you know how to ask it.

How can you be sure that your text will actually be remembered by the reader, and not just by you? Let’s be honest – anyone can be delighted with their own paragraph.

The problem: we write “in the dark”

If you work in communications, PR or marketing, you probably know this scenario: you create a text, polish the introduction, throw in an everyday-life example, hit “send”.

And then the feedback comes in:

– “Great, but maybe a bit shorter.”
– “Well written, but people won’t read this.”
– “Can you simplify it?”

Classic: lots of opinions, very little that’s concrete.

And I wanted something much more precise:

  • On what educational level will a reader understand this text without difficulty?
  • Is the language really aimed at people in communications, and not at IT specialists?
  • Am I by any chance writing more for myself than for the target group?

And this is where AI comes in – not as a “content generator”, but as a strict auditor.

What AI can extract from a text in two minutes

Instead of asking “is it OK?”, I started running texts through something I provisionally called the Comprehension and Resonance Index. I’ll keep the text itself a secret for now – I’ll only reveal that it’s about our industry, and that the author is… rather close to me.

In about two minutes, the AI generated, among other things, this fragment:

“FOG-PL (estimate): approx. 13–14 → corresponds to the end of high school or the beginning of university. Jasnopis (estimate): 4/7 → a text of medium difficulty, natural for informed recipients from the communications industry…”

So instead of “it reads well”, I got specifics:

  • difficulty level,
  • suggested audience that will have no problem absorbing the text,
  • a warning that people outside the industry may struggle.
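For the technically curious: indexes in the FOG family are simple enough to sketch in a few lines. Below is a naive Python approximation of a FOG-PL-style score – my own rough sketch, not the actual Jasnopis or FOG-PL implementation, and the Polish syllable heuristic in particular is deliberately crude.

```python
import re

# Naive syllable count: number of Polish vowel groups. Digraphs like
# "ie" or "ia" make this an approximation, which is fine for a sketch.
VOWEL_GROUPS = re.compile(r"[aeiouyąęó]+")

def syllables_pl(word: str) -> int:
    return max(1, len(VOWEL_GROUPS.findall(word.lower())))

def fog_pl_estimate(text: str) -> float:
    """FOG-style readability estimate:
    0.4 * (average sentence length + % of words with 4+ syllables).
    The result roughly maps to years of schooling needed to read easily."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    if not sentences or not words:
        return 0.0
    hard = sum(1 for w in words if syllables_pl(w) >= 4)
    return 0.4 * (len(words) / len(sentences) + 100 * hard / len(words))
```

A result around 13–14 on this scale is what the audit quoted above refers to: the end of high school or the beginning of university.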

In another case, the audit looked like this:

“The text is clear and well suited to an informed reader; you could only shorten the 2–3 longest sentences for full reading comfort.”

It sounds like a teammate who, instead of saying “it’s OK”, points out exactly what to improve.

Or:

“Jargon is used sparingly, fits the context well and is mostly explained. For this group – the level is very well judged.”

So it’s not just “is the text understandable?” – it’s also: is the level of specialist vocabulary appropriate for people in communications, PR or marketing, who are not programmers but are already familiar with AI?

3 key layers that matter in communication

What I like most is not that AI “crunched some numbers”, but what exactly it counted and how it structured it.

The audit was divided into three levels:

  1. Linguistic comprehensibility – sentence length, degree of difficulty, and whether the text reads like one solid wall of words.
  2. Industry language – whether abbreviations like AI, CMS, “agent”, “assistant” are clear to a humanities graduate in marketing, and not only to IT specialists.
  3. Value resonance – whether the text reflects the values the industry actually responds to: relief, safety, responsibility, realism instead of promises like “AI will do everything for you”.

At the end, the AI combines all this into a single indicator – something like a weighted average – and returns, for instance: “8.6 / 10 for communications, PR and marketing professionals”, with a note on what’s mainly missing: a few of the longest sentences should be shortened, and the text needs one concrete numerical example – how many hours per week I actually save, and what exactly the agent does for me.
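To make the “weighted average” less hand-wavy, here is a minimal sketch of how three layer scores could be combined into one indicator. The weights are my illustrative assumption – the actual weighting lives inside the audit prompt and isn’t a published formula.

```python
def resonance_index(linguistic: float, industry: float, values: float,
                    weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Combine the three layer scores (each on a 0-10 scale) into one
    indicator. The weights are illustrative, not the 'official' ones."""
    w_ling, w_ind, w_val = weights
    return round(w_ling * linguistic + w_ind * industry + w_val * values, 1)

# Example: layer scores of 8.5, 9.0 and 8.4 give an overall 8.6 / 10.
print(resonance_index(8.5, 9.0, 8.4))  # -> 8.6
```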

This is no longer “AI magic”. It’s concrete, statistical work that we usually don’t have time for.

Why it works – and why you can’t blindly trust the results

To be clear: this is not a single prompt like “rate the text on a scale of 1–10”. There’s a whole system running in the background, based on:

  • simulation of readability indexes adapted to Polish (FOG-PL, Jasnopis),
  • analysis of information density (how much concrete content vs. how many “empty calories” – see the sketch after this list),
  • analysis of tone and sentiment (whether the text is more educational or more salesy),
  • checking the fit to the target group.
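For the density component, a minimal sketch of the idea, assuming a hand-made filler list – a real implementation would use a full Polish stopword set or part-of-speech tagging rather than this hypothetical handful of words:

```python
import re

# Hypothetical filler list, for illustration only; a real audit would use
# a proper Polish stopword set or POS tagging to identify content words.
FILLERS = {"bardzo", "naprawdę", "właściwie", "jakby", "oczywiście", "generalnie"}

def information_density(text: str) -> float:
    """Share of tokens that carry content rather than filler (0.0-1.0)."""
    tokens = [t.lower() for t in re.findall(r"\w+", text)]
    if not tokens:
        return 0.0
    content = sum(1 for t in tokens if t not in FILLERS)
    return content / len(tokens)
```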

This is not “whatever the model feels like”.

It’s a practical translation of research from linguistics, natural language processing and hands-on experience with LLMs/GPTs into a tool for people who write in Polish – for humans, not for academic reviewers.

But – and this is an important point – I treat this index as a trend indicator, not a final verdict. AI is great at counting, detecting patterns and pointing out problematic fragments. But it doesn’t know your brand the way you do, it doesn’t understand the political or industry context or internal realities, and it won’t take responsibility off your shoulders for every word you publish. That’s why for me it works like this: first AI analyses the text, then I decide what I accept and what I consciously leave as is.

All right, but what does this mean for a humanities person?

If you work in communications, PR or marketing, you’ve probably seen AI so far mainly in presentations about the “revolution”. So did I. Only when I did solid research, wrote out the methodology, “trained” an agent on Polish texts and ran my own materials through it did it start to make sense.

Under three conditions:

  • you treat AI as an analytical tool, not a magical source of truth,
  • you have the patience to spend a few hours once on setting up a good process,
  • you treat the results as decision support, not an excuse (“that’s what AI came up with, so it’s not my fault”).

For me, the “Comprehension and Resonance Index” has simply become another stage in working on a text. Just like we used to ask a colleague: “read this with fresh eyes and be honest”, today I add one more, very precise reader… who doesn’t get tired at the thirtieth paragraph.

And I’ll leave you with one question:

When was the last time someone really analysed your text, instead of just saying “it reads well”?
