AI workslop is already part of how we work as EAs, even if we haven’t been calling it that yet. It’s worth defining it properly from the start, because once you see it, you notice it everywhere. AI workslop is low-effort AI-generated content that looks polished and complete, but hasn’t been properly checked by the person who created it, thought through, or adjusted for the situation. The problem with AI workslop is that once it’s sent out into the world, it shifts the work to the person receiving it, who then has to review it, fix it, and make it usable.
If you’re supporting an Executive right now, you’ve probably already noticed this type of work. We’re receiving more AI workslop in the form of emails, summaries, and reports. I’ve even noticed people clearly using AI to reply to what I’m saying during calls. So what does this work look like? On the surface, it looks structured and ready to go, but when you actually sit down with it, it takes time to work out what it’s saying. Usually, it’s overly complicated and wordy; context is missing, and you, as the EA, have to figure out what needs to change before it can be used or passed on to your Exec.
As EAs, we’re often the ones who will see AI workslop first, or notice it more often than others as it becomes part of everyday work. You might already be spotting it in drafts sent for your Executive’s approval, emails, reports, and summaries, or you might just be starting to notice it creeping in. Either way, this is an area where I can see growth potential for EAs, and where we can add a lot of value. We can learn to spot AI workslop early because we’ll see it a lot. If we have a good understanding of the business, we can identify what needs to change and decide how much of it should be refined before it moves forward or goes to your Executive for review.
I don’t think I can overstate how much of a problem this is already becoming. Research shows that 41% of employees can recall receiving AI workslop that affected their work, and more than half admit to sending it themselves. So this isn’t a one-off problem; it’s happening really regularly. It also impacts how people feel about the work they receive and the person who sent it to them. Poor-quality AI output can erode trust, create confusion, and lead to more back-and-forth. I’m sure we’ve all seen an AI-generated email and thought ‘what do they actually mean?’
So while AI workslop might feel like a new problem, it’s been with us for a few years and is starting to be baked into how work is shared. The difference is that now we’re starting to recognise AI workslop for what it is and the impact it has on our day-to-day work.
Why is this happening across organisations?
AI workslop isn’t coming from a single person or team. We’re seeing AI workslop across organisations, and when you look a bit closer, it is clear why so many of us are using these tools to generate work that is subpar.
We are all being told to use AI. At the same time, we are all being encouraged to move faster, automate work, and increase our productivity because the technology is now in place to support us at this pace. There’s constant pressure to produce more, and AI feels like the quickest way to do that. But in many cases, there isn’t clear guidance on how to use it well, or what good output looks like, which unfortunately leads directly to more AI workslop. There’s also a gap between the number of people who have access to these tools and those who have been trained to use them. How many of you have been given a licence for Copilot or ChatGPT with no real training on how to use it in your role? That training gap shows in the quality of what gets produced.
So people generate content, pass it on, and move to the next task, and AI workslop moves with it.
Again, the research shows AI workslop is happening at scale. As I’ve said, over half of employees admit they send low-quality AI workslop at least some of the time, and around one in ten say that half or more of what they send is unhelpful or low quality. So it’s not like we don’t know we are doing it! And the data is telling us AI workslop isn’t rare; it’s now part of everyday work.
There’s another pattern behind AI workslop that’s worth calling out. Work is still expected to move quickly, and AI gives people a way to produce something really fast, even when they haven’t had time to fully think it through or sense-check it. There’s a growing body of research suggesting that AI is dulling our thinking, mostly because we aren’t checking the output from these tools. So instead of finishing the work, it gets passed on in a half-complete state, and the AI workslop moves to the next person in the chain.
You’ll see this in practice with drafts of emails to clients that don’t actually answer the question, summaries of meetings that miss key decisions or next steps, or reports that sound polished but don’t reflect what is really happening. The output exists, but the thinking hasn’t been fully done yet, and that’s where AI workslop sits, moving through teams instead of being resolved at the point it was created.
I had this recently with a supplier. We went back and forth over email four or five times, and it was clear they were using AI to write their replies. Each response became more confusing, and none of the emails actually answered the question. In the end, I had to pick up the phone to get a straight answer. It took way more time than necessary, and I don’t think I’ll use that supplier again. We used to have office phrases that didn’t say much, like “let’s circle back” or “blue-sky thinking.” Now we’re seeing the same issue in a different format with AI workslop, where the words are there, but the meaning isn’t.
So what we’re seeing is a pattern. More AI workslop being created, and more AI workslop being passed along before it’s ready.
The shift for EAs to quality control
This is where the shift begins for EAs. We can start to act as a filter for AI workslop and as a quality control point before it moves forward. There is a real opportunity for us to step into a new area created by the inclusion of these AI tools in our work.
So you might already be noticing AI workslop coming through your inbox, or you might just be starting to see patterns in how it shows up and who the main culprits are for this type of work. Either way, this is a skill we can really build this year. Spotting AI workslop early, deciding what needs to change, and choosing when to step in before it moves forward.
What would this look like in practice? It could be reviewing and shaping AI workslop before it reaches the next step, whether that’s a stakeholder, a client, or a wider team. It can also mean pushing work back when it isn’t ready. That part is tricky because it does take confidence. Saying this needs more work, this doesn’t answer the question, or this needs to be rewritten before it goes out isn’t always easy, but it sets a standard for what good looks like.
For many EAs, this won’t feel like a formal part of the role yet, but it’s definitely starting to become one. The more AI workslop moves through organisations, the more valuable it is to have someone who can spot it, question it, and make a call on what happens next. Let me give you a few more examples of how you can push back on AI workslop and how to approach this work:
- Checking accuracy and making sure the AI workslop actually reflects what’s going on
- Rewriting AI workslop for clarity so the message makes sense
- Adjusting tone so the AI workslop sounds like your Executive if the work is supposed to be coming from them
- Adding context where AI workslop has kept things too generic
- Flagging when AI workslop doesn’t add up or needs a rethink
If you think about your day, I’m sure you have already handled AI workslop multiple times without calling it out. I’m sure you’ve thought this work feels off, or frankly, is clearly AI-generated and isn’t very good. You can add so much value by calling it out, not passing it on, or simply asking ‘what do you actually mean here?’
The other thing to add is that editing or pushing back on AI workslop protects your Executive’s time, their reputation (especially when the work is meant to be signed off by them or written in their name), and the quality of the decisions being made. Quite often, we are the last point before AI workslop potentially goes out into the world, which puts us in a very influential position.
How EAs can manage AI workslop without taking on more work
AI workslop can easily turn into more work for us, so we need some structure around how we handle it day to day.
The first step is setting expectations with your Executive. Have a conversation about what good output looks like and where AI workslop needs to be reviewed before it’s shared. You might need to share some examples with them so they can see what you mean. Be really clear on what needs to be checked and what level of detail is expected, so the AI workslop doesn’t keep coming back to you. You’ll also need your Executive’s backing if you are going to push back on some of this work, because people tend to deny that AI was used at all.
You can also agree when AI should be used and when it’s easier to just write something properly from the start, which reduces AI workslop at source. I can see a checklist or guidance piece being written here – again, a great way for you to add value.
In fact, a simple review checklist would really help when you’re dealing with AI workslop. When something lands in your inbox, run through:
- Is this accurate?
- Is this specific to our business?
- Would this make sense to the person receiving it?
If the answer is no, then the AI workslop needs more work before it goes anywhere else.
It’s also okay to push back on AI workslop. If something creates more work downstream, it’s worth saying so and explaining why. If you are known as an EA who doesn’t allow this type of work to get to your Executive, you’ll notice how your stance reduces the amount of AI workslop being shared in your team.
And finally, keep examples of the AI workslop you’ve edited or worked on. When you can show the difference between raw AI workslop and what you’ve turned it into, it becomes much easier to explain the value of what you’re doing and a new area you are moving into.
I think it is fair to say AI workslop isn’t going away. So the more we shape how AI-generated content is used, the easier our day-to-day work becomes and the more visible our value as EAs becomes.
When you recognise AI workslop as a pattern in your office and start doing something about it, it’s a really proactive move to take because you are influencing the quality of what gets created in your team. That might look like setting a standard with your Executive, pushing work back when it isn’t ready, or guiding others on what good looks like before something is shared.
Over time, people will start to notice. They’ll know that if something reaches you, it will be checked, it will make sense, and it will reflect how your Executive actually works. That builds trust in you as an EA because you are demonstrating you have high standards, and it also reduces the amount of AI workslop that gets passed around in your team. Win-win!
So if you’re looking for where you can add value this year, this is a very practical place to start. Notice where AI workslop is coming from. Decide where you step in. Set a standard for what good looks like. And be consistent with it.