Resources

Fresh perspectives on reducing work friction and improving employee experiences. Research, case studies, and insights on how FOUNT helps transform workflows.

Product Knowledge

5 Friction Trends for 2025

KEY TAKEAWAYS

  • Organizations are undertaking digital transformation to increase productivity but aren’t necessarily seeing the results, because of friction.
  • Because the new tech is meant to enhance and improve work, it’s important to understand exactly how that work gets done.
  • Most organizations aren’t measuring the right things, which is why friction is stalling or upending their transformation efforts.

The pace of digital transformation is increasing. Just look at AI. A recent McKinsey survey found that 78 percent of respondents reported using AI in at least one business function – up from 55 percent a year earlier. And that number will only climb as 2025 marches on and more and more organizations undertake transformation projects.

Why? In most cases, organizations are embracing new technology for its ability to ramp up productivity. Yet despite the big investment that these types of projects generally demand, those increases aren’t always happening. But that’s not necessarily a shortcoming of the tech.

Instead, it’s a result of something most organizations aren’t measuring: friction. Friction exists in every job, and without an explicit plan to identify, measure, and reduce it, technology will not in itself deliver the productivity gains that organizations are looking for.

That’s why any new technology investment should include an examination of friction. Here are five friction trends shaping workplaces in 2025 – and how you can address them in your next transformation project. 

1. The Unrelenting Pace of Change Is Fueling Friction

The pace and intensity of enterprise transformation efforts have increased as organizations look for ways to grow and get more done – without constantly increasing headcount. Not surprisingly, they’re turning to tech, with 63 percent of CFOs looking to boost IT or digital transformation spending as a way to increase efficiency.

One thing not many are doing as part of these efforts, however, is measuring the impact of that technology on work. Adding new tech without addressing the underlying processes that may already cause friction not only won’t reduce friction, it might create more.

It’s like taking a shiny new Ferrari for a spin on a crumbling, pothole-laden highway. You have a great piece of automotive machinery at your disposal, but you’re not going to get the performance it’s capable of on a flawed stretch of road. 

New technology can fall victim to a similar problem, leading to a cycle of snowballing friction, which of course strains productivity. That’s why any transformation effort should include an understanding of how workers are reacting to the change and interacting with new technologies. Friction data can provide this insight.

2. Most Leaders Aren’t Tracking the Right Data to Measure and Drive Adoption

Because transformation has become a constant state, companies can’t afford to fall behind the curve on adoption.

In the past, when transformations happened more slowly and sequentially, you could bank on adoption catching up eventually. With transformations now happening in ongoing waves, however, that approach doesn’t cut it. In fact, it only leads to ever-greater gaps. To address this issue, transformation leaders need more visibility into the barriers that are holding up adoption.

Friction data can provide early quantifiable evidence of adoption. By surveying employees on the very specific tasks they perform – and how new tech does or doesn’t help with those tasks – leaders can get a clearer picture of whether their changes are improving productivity. Just as importantly, they can get insight into what to fix in a tech rollout that isn’t going according to plan.

3. Friction Data Can Help Companies Get More From Their New Technology

One important thing to remember is that new technology (such as AI) in itself is not a differentiator. The real value of any new tech comes from what workers do with it. They want new tools because these tools are supposed to make things easier. But technology is no match for a bad process or workflow.

The problem is that most leaders can’t see the connection between processes, existing tools, and their new technology. It’s the Ferrari issue again. Leaders are usually more focused on what their workers are driving (the tech they’re using) than the roads they’re driving on (their processes and workflows). 

What they’re missing is a solid understanding of how work gets done, which would allow them to see how the new tech fits into the ecosystem of the organization. But most measurement tools don’t dig deep into work. 

That’s what makes friction data such a valuable tool in a transformation project. By getting to the heart of the work at hand – the actual tasks and the obstacles that slow them down – friction provides the kind of insight that shows where new technology can make the biggest impact.

4. Leaders Are Struggling to Scale Individual Productivity Gains – But Friction Data Can Help

The productivity gains of new technology can be difficult to scale in an organization.

For example, an individual coder may be able to get a lot of value out of a particular AI tool, but expanding that value to a wider team is more complicated – not every worker will have the same experience. And the other systems and processes that coder participates in may not have changed at all.

Think back to the Ferrari. Coding is really just one section of road – a great driver or a smoother section of asphalt may lead to better results in that isolated context. But if the driver cruises for a few miles only to stall at a checkpoint for an hour – or if a coder is able to work quickly but then has to spend hours in process meetings – the benefits won’t scale.

And it’s the scale that matters. That requires a more detailed view of how all of your coders do every part of their jobs. 

Process mining can provide some good insight, of course, but only in terms of an organization’s digital systems. What it can’t measure is anything to do with the more complex phenomenon of how workers operate in those systems – both before and after the introduction of new technology.

For that deeper insight, friction data is the better tool: it digs into the specific tasks the technology is meant to enhance. It’s about changing the key question from “Do employees know how to use the tool?” to “Is this tool helping people do their work?”

5. Today’s Effective Transformations Demand Data Beyond Engagement

Many organizations looking to execute an effective transformation will turn to engagement surveys to see how their employees are reacting to the changes. But while engagement surveys can give leaders a general idea of where problems lie, they can’t provide specifics as to what leaders need to do to fix them.

For example, 40 percent of workers may say in an engagement survey that it’s hard to get their job done. But where does that leave the leader who’s looking to bring that number down?

To understand what’s really getting in the way of productivity – and to get an idea of what to do about it – leaders need measurable data about how work happens. Friction analysis uses targeted microsurveys to identify hidden friction points and map specific work activities to systems, processes, and people.

Don’t Let Friction Hold Your Transformation Back

Enterprise transformation projects are, by their nature, expensive, anxiety-inducing undertakings. And these pressures are only magnified in the current environment by friction, which can undermine even a well-planned effort.

When there are millions of dollars and high expectations on the line – and when your competition is moving quickly – leaders need hard data that will tell them whether new technology is going to deliver increased productivity. And they need it early. This is where friction analysis comes in.

Understanding how work actually happens can help take you from friction to transformation traction – let us show you how.

Product Knowledge

FOUNT vs. Process Mining vs. Employee Engagement

Whether you want to measure the effectiveness of a specific AI tool or the impact of a larger digital transformation, choosing the right data to analyze is essential. In this piece, we’ll break down the difference between three types of internal data, all of which measure some aspect of the internal operations of a business:

  1. Process mining, which measures internal processes based on analysis of data gathered from the software people use to complete those processes.
  2. Employee engagement, which measures how employees feel about their jobs.
  3. Work friction, which measures where and how work is slowed by various obstacles.

You’ll walk away with a clear sense of how measuring work friction fills the gap between process mining and employee engagement data, along with an understanding of how FOUNT’s work friction analysis can benefit your bottom line.

Process Mining: Data Gaps on Employee Impact

Process mining aims to identify and improve inefficient processes by creating a map of every digital touchpoint involved in completing these processes. It tends to work best when every action involved in a given process is digital – that is, when employees aren’t taking additional steps that can’t be tracked.

But that’s also a major shortcoming of process mining: many workplace processes involve non-digital steps. For example, if an employee always takes a coffee break after submitting a request because they know the system takes several minutes to process it, that wait never shows up in the process data.

Another shortcoming of process mining arises when it comes to the optimization of processes. While process mining may give a fairly accurate map of what a process looks like, it can’t provide any context as to why certain bottlenecks are happening.

This can pose difficulties for organizations. While a process mining exercise may show the presence of a bottleneck and therefore justify resources being spent on that bottleneck, it doesn’t provide leaders with any information on what to change.

Often, the missing information lies not in the digital systems (ERPs, CRMs, messaging platforms, etc.) but in the people interacting with them. In other words, the key insights about why something isn’t working involve how the systems impact an employee’s ability to do their job.

Process mining can’t quantify that.

Voice of the Employee: Data Gaps on the Performance of Tools

Employee engagement data – also called voice of the employee – exists on the other end of the spectrum. Typically gathered by surveys, polls, performance reviews, focus groups, NPS scores, and similar means, engagement data aims to assess employee sentiment about various aspects of work.

And while there’s real value here – employees who feel like their opinions are listened to are more than eight times as likely to satisfy and keep customers – voice of the employee data doesn’t offer any insight into why they’re feeling that way.

In other words, employee sentiment data can identify the existence of a problem but cannot reliably point to what that problem is or how an organization might fix it.

The good news: there is a metric that fills the gap between process mining and employee engagement. It’s called work friction.

FOUNT’s Approach: Track Work Friction to See the “Why” Behind Inefficiencies and Disengagement

Work friction is anything that prevents employees from doing their jobs, including people, processes, and technology. In addition to having an immediate impact on productivity – to the tune of about two hours per day per employee – work friction causes frustration for the workers dealing with it.

Unchecked, work friction can lead to disengagement and attrition in addition to productivity losses, making it a hugely expensive and often overlooked phenomenon.
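To get a feel for the scale involved, here is a rough back-of-the-envelope sketch. Only the two-hours-per-day figure comes from the point above; the headcount, loaded hourly cost, and working days are hypothetical placeholders you would swap for your own numbers.

```python
# Back-of-the-envelope cost of unaddressed work friction.
# Hypothetical assumptions, for illustration only.
EMPLOYEES = 1_000              # hypothetical headcount
HOURLY_COST = 45.0             # hypothetical fully loaded cost per hour, USD
WORKING_DAYS = 230             # hypothetical working days per year
FRICTION_HOURS_PER_DAY = 2.0   # figure cited above

annual_friction_hours = EMPLOYEES * WORKING_DAYS * FRICTION_HOURS_PER_DAY
annual_friction_cost = annual_friction_hours * HOURLY_COST

print(f"Hours lost to friction per year: {annual_friction_hours:,.0f}")
print(f"Approximate annual cost: ${annual_friction_cost:,.0f}")
```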

Now for the good news: work friction offers a way to quantify the gap between process mining and employee experience. Process mining evaluates how the digital components of processes fit together; voice of the employee surveys assess how workers feel about their jobs. Work friction assesses how employees are impacted by specific moments of work.

An analysis of your organization’s work friction lets you see…

  • Which specific tasks and moments are sources of inefficiency for which specific employees.
  • How big an impact points of friction have on employees’ work.
  • How various tools impact employees’ ability to do their work.

From there, it’s a straightforward task to quantify which problems are having the biggest negative impact on your organization and therefore to prioritize solutions.

Put differently: While all three approaches can be valuable to an organization, depending on its needs, assessing work friction is unique in that it provides insight into how the component parts of a workplace impact employees’ ability to do their work. It is the only one of these metrics that offers clarity about what an organization can change to eliminate the problems it faces.

For Data You Can Act On, Look to Work Friction

It’s important to know what’s not working at your organization. Process mining can help you understand that. It’s also important to keep an eye on how your employees feel – which is the domain of employee engagement data.

But when you need to understand the why – why a new tool isn’t increasing productivity, why the call center’s 90-day attrition rate is so high, why adoption of a new system isn’t correlating with improved efficiency – work friction data can give you answers.

If you’re curious about what work friction data might uncover at your organization, get in touch. We’d be happy to listen to your situation and show you how FOUNT works.

Webinars & Events

AI is Reshaping the HR Operating Model: Here’s What 15 Leading Companies Discovered

Written by Volker Jacobs, CEO and Founder of TI People, board member at FOUNT Global

Illustration: AI Heatmap ‘Total Change’ and ‘Associated Risk’ per HR Role

This research was co-created with 151 human experts from 15 major companies who provided more than 2,500 data points to validate and train multiple large language models. Industry experts including Andrew Spence, Jonathan Ferrar, Jörg Stegert, Stefan Hoffmann, and others contributed to validating the findings.

The Traditional HR Model is Being Squeezed from Both Directions

The HR function stands at a pivotal moment. After decades of following the three-pillar model (Centers of Excellence, Business Partners, and Shared Services), the HR operating model is being fundamentally disrupted by AI from two directions simultaneously:

From the top: Business leaders increasingly demand integrated solutions to complex people challenges rather than siloed HR expertise.

From the bottom: AI is rapidly automating transactional tasks while extending into domains previously considered territory for experts.

Our groundbreaking research – co-created with 15 major European enterprises including BASF, Deutsche Bahn, Novartis, thyssenkrupp, Symrise AG, Brose Group, and GXO Logistics, Inc., and validated by industry experts including David Green, John Boudreau, Thomas Otter, and others – reveals exactly how AI is transforming HR roles and activities.

The Numbers That Matter

For a 30,000-employee company, our research shows:

  • 29% average efficiency potential across HR functions
  • €5.2M in annual cost savings potential that can be reinvested
  • HR Specialist Operations roles face 50% efficiency impact (highest among all roles)
  • HR Business Partners see 19% efficiency potential while being pushed to deliver more strategic value

The data reveals that AI isn’t just automating HR—it’s transforming how value is created across three dimensions:

  1. Efficiency: Automating routine tasks to free up HR capacity
  2. Innovation: Enabling entirely new HR services that were previously impossible
  3. Personalization & Democratization: Extending high-quality HR services to broader employee populations in a more personalized way

“Really fascinating work – and based on the work, it’s made me realize that the impact on HR may be higher than I’d previously thought.” – David Green, Co-Author of Excellence in People Analytics

The research behind these and many more detailed insights can be readily applied in any large organization: it has been compiled into an ‘AI Impact Assessment’ tool that provides comprehensive, company-specific insights in just two days.

The New HR Operating Model

Illustration: Evolving HR Operating Model for the AI Era

Our research validates a shift toward product-oriented HR structures, where:

  • Product Managers own end-to-end HR products aligned to key user needs
  • Problem Solvers work across traditional boundaries to address complex business challenges
  • Service Delivery teams orchestrate AI-human collaboration
  • AI handles increasing portions of transactional and analytical work

What’s striking is how differently AI impacts various HR roles. While operational roles like HR Operations Specialists (50%) and HR Controlling/Analytics (32%) show the highest efficiency potential, even strategic roles like HR Business Partners (19%) will see meaningful transformation.

Three Critical Steps for CHROs

Based on our research with companies like GEA Group, R+V Versicherung, and SEW Eurodrive, here are the essential first steps:

  1. Assess your starting point: Evaluate your current operating model maturity, AI readiness, technology landscape, workforce capabilities, and value creation potential.
  2. Define your target HR operating model: Design your product-oriented structure with clear role definitions, governance framework, and value measurement system.
  3. Build product management capabilities: Most HR functions have limited experience with product management approaches.

Organizations implementing these principles show 40% higher returns on technology investments by redefining HR offerings around user needs rather than functional domains.

The Skills Gap Challenge

The transformation extends beyond technology. Nearly half (47%) of organizations cite insufficient training as their primary barrier to AI adoption. Our research shows HR functions need a balanced approach of 60% reskilling existing staff and 40% strategic hiring across most roles.

The five most critical skills for future HR professionals:

  • AI & Machine Learning Literacy
  • Human-Centered Design
  • Digital Product Management
  • Data Science & Analytics
  • Ethical AI Governance

What’s Next?

The tipping point for AI in HR has arrived. Generative AI represents the most significant opportunity to reshape HR in decades. ‘Squaring the circle’ and delivering more (business value) for less (HR cost) becomes possible.

For the full research or to discuss how these findings apply to your organization, contact Volker for a personalized readout session.

He’ll share detailed AI impact projections by HR role, skills gap assessments, and practical implementation strategies using product management principles.

Monthly Brief

APRIL Newsletter. Friction: You Can’t Improve What You Can’t See

The pace and intensity of enterprise transformation efforts have increased as organizations look for ways to grow and get more done – without constantly increasing headcount. Not surprisingly, they’re turning to tech, with 63 percent of CFOs looking to boost IT or digital transformation spending as a way to increase efficiency.

One thing not many are doing as part of these efforts, however, is measuring the impact of that technology on work. Adding new tech without addressing the underlying processes that may already cause friction* not only won’t reduce friction, it might create more.

*Friction is what slows work down – the inefficiencies, blockers, and extra steps workers face when using enterprise systems and processes to get things done.

NEWS

FOUNT Spotlighted by Comcast NBCUniversal LIFT Labs 

Number of the Month

Source: https://www.deloitte.com/global/en/issues/digital/maximizing-value-using-digital-transformation-kpis.html

FOUNT in Action: Support a Smooth Merger with Pre- and Post-Integration Benchmarking

Industry: Semiconductor & Software 

Problem: A large technology company was preparing for a major acquisition. While both companies had solid internal processes, leadership wanted a way to ensure the integration didn’t introduce new inefficiencies in workflows or degrade the employee experience.

Action: FOUNT helped the company establish pre-merger benchmarks across key moments of work, allowing leadership to quantify what was working well and what needed revisions. Post-merger, a follow-up measurement was planned to assess how the integration was going.

Result: Early signs of friction (e.g., unclear equipment request workflows, fragmented knowledge hubs) were flagged, giving teams a chance to fix them before they became systemic. The transformation is an ongoing, multi-year project. We will assess benchmarks in a dozen focus areas throughout the process.

Product Feature: Get Insights from External Tools Faster Than Ever

When you next log in to FOUNT, you’ll find that you can do a direct data upload from third-party survey tools like Medallia, Qualtrics, and LimeSurvey GmbH. This automated upload makes it much easier to go from import to insight.

It also brings users of third-party survey tools one step closer to the real-time data visualization you get when you use FOUNT’s own survey tool.

Bottom line: You’ll save time, reduce errors, and get to the insights your employee data reveals faster.


Most Recent Blog Posts

5 Friction Trends for 2025

  • Organizations are undertaking digital transformations to increase productivity, but expected gains aren’t there. The reason? Friction.
  • Because the new tech is meant to improve work, it’s important to understand exactly how work gets done.
  • Most organizations aren’t measuring the right things, which is why friction is stalling or upending their transformation efforts.

💻 Spot the trends in your org

Build vs. Buy: 6 Questions to Ask Before You Try to DIY Friction Measurement

  1. What is your survey tool designed to do?
  2. Does your survey data highlight targeted improvement opportunities?
  3. Will your system scale?
  4. Where will you get your survey questions?
  5. What will your time to value be?
  6. What will your maintenance costs be?

🧰 Get the full breakdown

Process mining vs. employee engagement vs. friction data

You can’t improve what you can’t see, and you can’t see what you don’t measure. This article explains how three types of metrics can provide a better picture of what’s happening in an organization:

  • Process mining tracks everything digital.
  • Employee engagement tracks how workers feel about their work.
  • Friction data tracks obstacles to getting work done.

Friction data complements the other two by illuminating exactly where problems exist so you can focus on fixing the right thing.

📚 Read Article

Articles We Recommend

📖 Employees Won’t Trust AI if They Don’t Trust Their Leaders

Even as AI adoption increases, employee trust in the tech is falling. Leaders who recognize the value in AI can reverse the trend by ensuring it’s trustworthy. That means taking pains to make sure its outputs are accurate and reliable. Also important: you can’t outsource genuine care to a robot. For best results, leaders need to keep the work of empathy firmly in their realm.

📖 Future of Work Trends 2025: Strategic Insights for CHROs 

A refreshingly clear-eyed look at AI, the potential impacts of losing expertise to retirement, and how loneliness might become a business liability. We found #7 particularly intriguing.

Until next time,

FOUNT Global team

Insights & Reports

AI Implementations Need Better Validation Metrics

KEY TAKEAWAYS

  • Determining ROI – especially early – continues to be one of the biggest challenges for organizations undertaking AI transformations.
  • Metrics related to increased productivity or efficiency from AI can take years to materialize – and most organizations don’t have the patience to wait that long on such a big investment.
  • Measuring AI’s impact on the specific work employees are doing provides a good leading indicator of a project’s potential success.

One of the biggest challenges for organizations looking to harness the power of AI continues to be justifying its cost. While promises of productivity and efficiency increases sound great, AI is ultimately an investment like any other: the sooner the technology shows ROI, the better.

That’s why this issue, more than any technical consideration, is what tends to impede AI progress in most organizations. Despite 85 percent of companies reporting progress in their AI strategy execution, only 47 percent say they’ve seen positive ROI from their AI implementations.

A big part of the problem is that it can take a long time – often several years – to get data on things like productivity and efficiency improvements associated with AI. And that time horizon often doesn’t align with that of the decision-makers funding these big investments: half of CFOs will kill an AI project without ROI after a year.

So while companies are excited about the possibilities of AI, most are far less certain they’ll be able to validate the technology and prove its impact – especially in those pressure-packed, make-or-break early stages. In this piece, we’ll explain why anyone looking for a leading indicator of AI success should be looking to their workforce. 

What Type of Data Could Be a Leading Indicator of AI ROI?

As noted, most AI investments aim for increased productivity or efficiency – both of which are notoriously difficult to measure (especially in real time). An organization that could do so in some approximation of real time would have an excellent window into whether its AI tool was on track to deliver positive ROI.

To measure productivity, we need to be able to measure outputs vs. inputs – for example, work achieved against the people, time, money, energy, etc., needed to achieve that work:

  • Did we do more with the same number of people or the same with fewer people?
  • Did we increase our output with the same input costs or maintain our output while decreasing our inputs?

Likewise, when it comes to efficiency, we need to know not only whether things are getting done faster, but also whether the quality of work is slipping as a result of that uptick in speed:

  • Did our people complete individual tasks faster?
  • Did our teams get through cycles – sales cycles, product launch cycles, etc. – more quickly?
  • Did we do things with fewer mistakes or less rework?
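If you can capture those output, input, and cycle measures, the comparison itself is simple arithmetic. Here is a minimal sketch; every figure in it is a hypothetical placeholder rather than a benchmark.

```python
# Minimal sketch: compare output-per-input, cycle time, and rework
# before and after a rollout. All figures are hypothetical placeholders.

def productivity(output_units: float, input_hours: float) -> float:
    """Output achieved per hour of input."""
    return output_units / input_hours

before = productivity(output_units=1_200, input_hours=4_000)  # hypothetical baseline quarter
after = productivity(output_units=1_350, input_hours=3_800)   # hypothetical post-rollout quarter

cycle_days_before, cycle_days_after = 21.0, 18.5   # hypothetical average cycle length
rework_before, rework_after = 0.08, 0.06           # hypothetical share of work redone

print(f"Productivity change: {(after / before - 1):+.1%}")
print(f"Cycle time change:   {(cycle_days_after / cycle_days_before - 1):+.1%}")
print(f"Rework rate change:  {(rework_after - rework_before):+.1%} points")
```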

Most organizations, however, don’t have the systems in place to measure these types of things. Instead, they’re looking at…

  • Higher-level metrics (qualified leads, sales pipelines, IT ticket completion metrics).
  • Lagging metrics (turnover, quarterly revenue, task backlogs).
  • Employee experience / sentiment analysis, which doesn’t speak to the impact of an AI tool on the actual work.

For Meaningful AI Data, Ask Your Workforce The Right Questions

Where can you find answers to those critical questions and get an early gauge as to how your AI investment is performing? By measuring the work that AI is impacting.

At its core, after all, AI is worker-focused technology, designed to help employees do their jobs more quickly, more easily, and more efficiently. But understanding whether things are actually playing out that way is about much more than simply learning how workers feel about their jobs and AI from a traditional employee experience survey.

What you need instead: ask questions that target the day-to-day activities that unfold in specific roles and detail the experience of actually doing the work. Topics for a software development team working with a new AI chatbot, for example, might include things like:

  • How much time do you spend trying to find answers about the code base, on average, per week?
  • Do you consider finding an answer about the code base under the current system to be easy or difficult?
  • Does the new AI tool make finding answers in the code base easier or harder?
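One way to keep such questions anchored to real work is to tie each item to a role and a specific moment rather than to general sentiment. The sketch below is a hypothetical illustration of that structure, using the developer questions above; the field names and scales are ours, not a FOUNT schema.

```python
# Hypothetical structure for role- and moment-specific friction questions.
from dataclasses import dataclass

@dataclass
class FrictionQuestion:
    role: str     # who gets asked
    moment: str   # the specific work activity the question targets
    prompt: str   # the question itself
    scale: str    # how answers are captured

survey = [
    FrictionQuestion(
        role="software developer",
        moment="finding answers about the code base",
        prompt="How much time do you spend finding answers about the code base per week?",
        scale="hours per week",
    ),
    FrictionQuestion(
        role="software developer",
        moment="finding answers about the code base",
        prompt="Does the new AI tool make finding answers in the code base easier or harder?",
        scale="much harder ... much easier (5-point)",
    ),
]

for q in survey:
    print(f"[{q.moment}] {q.prompt} ({q.scale})")
```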

These types of specific questions aim to identify the touchpoints and moments that lead to work friction, which includes anything that gets in the way of a worker doing their job, including people, processes, and technology.

The goal is to uncover information about the work itself – not just employees’ feelings about that work – and to determine if it has been made better or worse by the introduction of an AI tool. Once you know where work friction is, you’ll have a better idea of where the ROI on your AI is likely to land.

Work Friction Is an Excellent Leading Indicator of AI ROI 

Measuring work friction provides insight into whether an AI tool has had a positive impact on how employees are working, allowing you to validate an AI investment very early in the rollout. By looking specifically at employees’ task-by-task experiences and isolating the moments in their work days that slow them down or cause them trouble, you can get a clear picture of how AI is impacting those moments.

Comparing work friction data from before and after an AI implementation can offer early proof of productivity increases. But even decreases or new issues uncovered by the data can be useful, giving an organization a window into how its AI rollout is going – with specific areas to adjust if necessary.
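As a purely illustrative sketch of that before-and-after comparison, the snippet below diffs a placeholder “friction score” per moment across two survey waves. The moment names and scores are invented for illustration; higher scores mean more friction.

```python
# Hypothetical per-moment friction scores (1-5, higher = more friction)
# from surveys run before and after an AI rollout.
before = {
    "reviewing pull requests": 3.8,
    "writing documentation": 4.1,
    "finding answers about the code base": 4.5,
}
after = {
    "reviewing pull requests": 2.9,
    "writing documentation": 3.9,
    "finding answers about the code base": 4.7,
}

for moment, score_before in before.items():
    delta = after[moment] - score_before
    trend = "improved" if delta < 0 else "regressed"
    print(f"{moment}: {score_before:.1f} -> {after[moment]:.1f} ({trend}, {delta:+.1f})")
```

A mix of improvements and regressions like this is exactly the kind of early signal that tells you where to adjust the rollout.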

For example, if you introduce an AI tool to speed up your software development process, it may take a while to see concrete evidence that it’s actually working; and if it isn’t, it may be hard to tell why.

With work friction data, however, your developers – the people actually interacting with the AI – will tell you how the technology is either helping or hurting their productivity. And if things aren’t going well, you’ll likely have some ideas of how to improve the AI implementation (instead of scrapping it altogether).

Even better, work friction data can give you an early idea of which of your more promising AI initiatives you might want to lean into. As former Grammarly CEO Rahul Roy-Chowdhury recently noted in a LinkedIn post about AI and ROI: “By continuously iterating and assessing your AI tools and use cases, you can cut through the clutter of AI promises and double down on what’s working.”     

Don’t Use Old Methods to Measure A New Way of Working  

Just as an AI transformation signals a new way of working, work friction represents a new way of measuring work. And in doing so, it serves as a way for leaders to get out ahead of any employee-related issues or problems that might derail their AI project.

In other words, amid growing pressure to prove ROI for an AI project, work friction can be a key early indicator of success or failure. By demonstrating how workers are adopting, adjusting to, and interacting with AI, an organization can have verifiable data to validate its investment and determine its future course of action – in weeks instead of years.

Insights & Reports

How AI Tools Change Your Team’s Work (And What to Do About It)

Your new AI tool is fully integrated into your team’s workflows. You’ve made sure everyone has the training they need. Now all you have to do is sit back and watch the productivity gains roll in, right?

Not quite.

Workplaces are complex organisms. You can’t change one thing (like automating certain tasks) without ripple effects. The seldom-discussed Phase 2 of AI transformation is figuring out how automation tools change employees’ work and adjusting as needed to account for those changes.

In this piece, we’ll take a look at why AI tools can have ripple effects within an organization, how to identify these effects, and how to adapt to make sure you’re still enjoying the benefits of increased automation.

The Ripple Effects of AI Tools

Picture a call center for a telecom company. The company introduces an AI agent that can handle the simplest 15 percent of customer calls. After a period of adjustment, the tool is working great – but call center agents are facing new challenges.

Now, because all the simplest calls are handled by a bot, human agents are facing more complex customer scenarios in every conversation. What’s more, customers with complex issues still have to go through the AI agent to get to a human – and they’re often frustrated by that. In some cases, the AI tool isn’t providing helpful summaries of a customer’s needs, so that agents are forced to ask people questions they’ve already answered.

The result for call center employees is that the nature of their work has changed. Now, instead of dealing with some combination of simple and complex calls, they’re dealing exclusively with complex calls and with a higher proportion of agitated callers.

To do this changed work, they may require a different skill set or different tools – stronger deescalation skills, for example. This might require additional training for some employees.

And what about the systems call center agents use to look up customer accounts? Are those efficient? Do they load quickly and let agents view a customer’s entire history with the company from one screen? If not, the wait times while agents load information could further exacerbate caller agitation.

Things continue to ripple from here. Will agents need additional breaks to recover from stressful interactions? New mental health benefits? Will you need to have additional managers available so agents can escalate calls more easily? Will you need to give agents greater latitude in granting refunds or applying promotions to appease angry callers?

I could go on, but you get the idea. Automate work, and the remaining work changes. Now let’s take a look at how you can identify and address those changes before they cause new problems.

How to Identify What AI Tools Change

While we don’t know in any given AI transformation what exactly a new AI tool will change, we do know that it will cause changes. This is why it’s so important to have a way to track those changes and measure their impact on employees’ ability to do work.

FOUNT’s work friction framework is designed for exactly this. It involves conducting targeted surveys of impacted workers. Unlike traditional employee experience surveys, which typically ask how employees feel about their work, work friction surveys ask about the work itself: How did the AI tool impact your ability to do a given task? How satisfied are you with doing it? And so on.

From these surveys, we gather data on two things: moments (aka tasks workers complete throughout the workday) and touchpoints (aka people, processes, and things workers interact with to do their work).

We then assess two components of the moments and touchpoints we measure:

  1. Impact, meaning the extent to which that moment or touchpoint affects the work a person does; and
  2. Satisfaction, meaning how well that moment or touchpoint is currently working.

With this data, it’s easy to measure where the AI tool is having the greatest positive and negative impacts – and where unexpected consequences may be playing out.
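As a rough illustration of how those two components can be combined, the sketch below ranks items by a simple impact-times-dissatisfaction priority. The formula and scores are hypothetical stand-ins, not FOUNT’s actual scoring model; the item names anticipate the call center example discussed below.

```python
# Hypothetical moments/touchpoints scored on 1-5 scales for impact and satisfaction.
items = [
    {"name": "AI-to-agent handoff",             "impact": 4.6, "satisfaction": 2.1},
    {"name": "looking up customer information", "impact": 4.2, "satisfaction": 2.4},
    {"name": "post-call wrap-up notes",         "impact": 3.1, "satisfaction": 3.8},
]

for item in items:
    # High impact combined with low satisfaction floats to the top.
    item["priority"] = item["impact"] * (5 - item["satisfaction"])

for item in sorted(items, key=lambda i: i["priority"], reverse=True):
    print(f'{item["name"]}: impact {item["impact"]}, '
          f'satisfaction {item["satisfaction"]}, priority {item["priority"]:.1f}')
```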

Now let’s take a look at how to react to that data to make sure your organization continues to benefit from your AI investment.

How to Adapt After Implementing an AI Tool to Maintain ROI

Let’s return to our hypothetical call center. The AI bot is handling 15 percent of call volume, but the remaining callers are both more complex and more demanding than they were pre-AI tool. As a result, the productivity gain you’re seeing is only about half of what you budgeted for.

(Related: How to Assess the ROI of Current AI Initiatives & Prioritize Future Investments)

You need to change something to get those productivity numbers up, but what?

The answer lies in your work friction data. Look for moments and touchpoints that have a high impact score and a low satisfaction score.

These are the components of work that make a big difference to an employee’s ability to get their job done and that are not functioning well.

Maybe you discover two areas with high impact and low satisfaction:

  1. The AI-to-agent handoff
  2. Looking up customer information in the company database.

From text responses to the survey, you learn that agents don’t have enough time to read the AI’s summary before being connected to callers. This is stressful, in part because callers tend to get frustrated when they have to repeat their situation to the human agent.

You also learn that looking up customer information is frustrating because agents have to access multiple databases that aren’t always synced. Load times can lead to delays, which can add to callers’ frustration.

From here, you have two obvious levers to pull, one of which is easy and low risk. You increase the delay between handing a caller from the AI to a human agent; when you survey workers again a week later, they’re much more satisfied with that part of their work. What’s more, text comments note that having more time makes it easier to defuse customer frustration.

The internal systems are still a pain point, but now you’ve bought yourself some time to figure out this bigger, longer-term issue.

In AI Transformations, Launch Is Only the Start

Launching an AI tool may feel like a culmination: you’ve done the research, done the training, perfected the tech setup, and then you’re live! And while go-live may be the end of the first phase of your AI journey, it’s only the start of the rest of it.

Introducing automation to a system as complex as an employee’s work inevitably changes it, often in unpredictable ways. To make sure you’re providing employees with the tools and resources they need to do their jobs and staying on track to enjoy the benefits you planned for from your AI investment, plan to measure work friction after launch.

Identifying places of high friction will help you know what the next best steps are as you proceed down the road to becoming an organization powered by AI.

Product Knowledge

Build vs. Buy FOUNT: 6 Questions to Ask

KEY TAKEAWAYS

  • Measuring employee work is an excellent way to identify opportunities to increase productivity, reduce costs, and improve employee experience.
  • When deciding whether to build or buy a platform to measure work, consider things like time to value, in-house expertise, and maintenance costs.
  • Key to success is a system that measures not how employees feel about changes but how the changes you make impact the experience of getting work done.

If you’re considering FOUNT as a way to get clear, actionable data on employee work and how to make it better, you’ve probably wondered whether you can build a FOUNT-like system internally.

After all, you likely already have the ability to run internal surveys. You no doubt have an IT team capable of capturing data from those surveys and using it to power dashboards that track responses. Why not combine those capabilities to create an in-house version of FOUNT’s offering?

It’s a question we hear sometimes. In this post, we help you answer it by outlining six questions to work through internally as you consider whether to create a home-grown version of FOUNT. We’ll also touch on how to think more realistically about the resources you’ll need to build vs. buy.

Question 1: What Is Your Survey Tool Designed to Do?

Classic survey tools like Qualtrics and Medallia are often used to uncover how employees’ experience of their work is changing, not to evaluate the specific tasks or workflows where friction might occur. FOUNT was purpose-built as a work friction tracking platform that uses targeted surveys as one part of its system. It’s not just about surveys – it’s about the combination of content, methodology, scoping tools, data analytics, and dashboards, all providing decision-ready insights into how work gets done – and where it’s being slowed down.

For example, you may learn from a traditional employee engagement survey (or tool survey) that workers aren’t crazy about a new AI copilot intended to increase their coding output (Figure 1). The open-text responses may even offer some insight as to why: it works well for some tasks but not others, so it’s sometimes faster to do the work the old way.

That’s good to know – but it doesn’t offer any actionable insight into how you might improve the copilot.

Figure 1: Traditional employee surveys don’t always offer action-ready data

FOUNT is designed to go deeper. It can identify, for example, which specific work tasks the copilot is making more difficult (generating new code? reviewing pull requests? creating documentation?) and for which employee populations (junior developers? senior developers? those newer to your org?). This brings us to our next question.

Question 2: Does Your Survey Data Highlight Targeted Improvement Opportunities?

If you’ve ever struggled to get employees to answer internal surveys, you understand the problem of survey fatigue. One major driver of survey fatigue? Too many organizations don’t do anything based on the data they gather from surveys. Or else they don’t clearly communicate what they are doing. The result: employees see little point in providing answers.

FOUNT questions, on the other hand, ask about the work itself: did the copilot make it harder or easier to review pull requests? How satisfied are you with the experience of using the copilot to review pull requests? Why?

The data that comes from these surveys is simple, too: it offers decision-ready insights.

For instance, you might see that junior developers struggle with AI chatbot responses during code reviews but are satisfied when using it to generate boilerplate code – pinpointing exactly where to invest in improvements.

Read the case study: $5.4M in Annual Savings by Leveraging GenAI Tools and Removing Work Friction

We gather this data based on a proven, proprietary system (Figure 2).

Figure 2: Screenshot of FOUNT displaying survey responses

If you’re building a tool to identify opportunities for productivity increases and cost savings, you’ll need to make sure the survey component can ask questions that deliver decision-ready insights.

Question 3: Will Your System Scale?

FOUNT is built to scale. If you want to break a moment (a specific work activity) into multiple moments, you can do that without losing existing data. If you want to change the name of a touchpoint (the people, processes, or tools that support “moments”), for example, the new name autopopulates everywhere it’s being used.
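That kind of rename propagation typically comes from storing each touchpoint once and referencing it by ID everywhere else, rather than copying its name into every moment and dashboard. The sketch below illustrates the general design idea; it is not a description of FOUNT’s internal implementation.

```python
# Touchpoints stored once, referenced by ID from each moment.
touchpoints = {
    "tp-17": {"name": "AI copilot"},
}

moments = [
    {"moment": "reviewing pull requests", "touchpoint_id": "tp-17"},
    {"moment": "writing documentation",   "touchpoint_id": "tp-17"},
]

def moment_label(moment: dict) -> str:
    # The name is looked up at display time, so it is never stored twice.
    return f'{moment["moment"]} via {touchpoints[moment["touchpoint_id"]]["name"]}'

touchpoints["tp-17"]["name"] = "GitHub Copilot"   # rename once...
print([moment_label(m) for m in moments])         # ...and every reference reflects it
```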

When one of our customers tried to build a version of FOUNT in house, this was a particular pain point: when they wanted to change a term, they had to manually change it everywhere it appeared in the system.

It was particularly onerous because their system powered dozens of dashboards for various stakeholders across the organization, and they had to make changes for each dashboard.

Worse, they’d brought in consultants to do the initial survey question setup and had to tap those resources again when they needed to make changes. So while they were able to get to where they wanted, it was much more time- and cost-intensive than they’d hoped.

Question 4: Where Will You Get Your Survey Questions?

This is one area whose impact companies tend to underestimate. The assumption is generally that the IT setup will be the most complex part of building a FOUNT-like system in house.

In reality, the content of the questions is just as complex – and just as important to get right.

As we mentioned before: traditional employee survey tools are designed to get information about employee sentiment. People who are experienced users of these systems are great at coming up with sentiment-type questions. But they’re generally not familiar with how to ask questions to uncover the friction in the experience of getting work done.

For example, one customer that tried to build a system in house ended up asking questions that mixed up the roles of moments and touchpoints. They ran initial surveys and got initial data but couldn’t figure out what to do with it.

This is because the questions weren’t structured to assess work.

FOUNT’s questions not only assess work, they go deeper and deeper until your organization has usable data on what to do about the problem areas our questions uncover.

What’s more, we have hundreds of questions from past surveys that we know work. Being able to use these on day one can save your organization months of time you’d otherwise spend drafting questions, testing them, refining them, and re-surveying employees until you get actionable data.

Question 5: What Will Your Time to Value Be?

When you work with FOUNT, time to value can be less than a month. Setting up and running an initial survey can take just a few weeks; from there, you’ll have clear insights into what’s holding your workers back from doing their jobs effectively. In just a matter of weeks, you’ll be able to create a roadmap for making changes that you can be confident will positively impact your bottom line.

If you build in house, time to value could be a year or longer. You have to…

  • Scope the technical setup of the system.
  • Build the system.
  • Write survey questions.
  • Conduct surveys.
  • Assess data.
  • Make changes based on the data.

The first three items will take the longest. But even once the system is up and running, getting decision-ready insights from your survey questions might not happen right away, as we explained above.

For one of our customers who initially tried to build their own version of FOUNT, it took a year to go from zero to running surveys – and those surveys ultimately didn’t yield data that was useful enough.

Question 6: What Will Your Maintenance Costs Be?

Finally, it’s important to consider what the ongoing costs of maintaining a home-built system will be.

One customer that attempted to build an in-house system needed two FTE employees to maintain it. The main reason was that their system didn’t include many of the automations FOUNT does.

Ultimately, they realized it was less expensive to work with FOUNT than to dedicate two FTEs to system maintenance. What’s more, working with FOUNT gives them access to more questions, easier-to-use dashboards, and better data.

To Build a Work Measurement Machine, You Need to Understand the Frameworks and Methodology Behind the Surveys

To build a system like FOUNT, you need the framework, the technical setup, the engine to power and send surveys, content for survey questions, and a data analytics layer to interpret the survey answers you gather.

None of those is easy to build. What makes them particularly challenging to do without expert guidance is that FOUNT’s surveys are not traditional employee experience surveys. Think of what we do: we’ve figured out how to ask precisely the right thing to get maximum actionability with relatively few data points.

The data – capturing changes in how work is experienced – gives you the clarity to see what’s working, what’s not, and where you can make changes to have a meaningful impact on workplace outcomes.

Insights & Reports

You Just Deployed a New AI Tool. How Soon Can You Know if It’s Working?

You’ve deployed a new AI tool in your organization.

Congratulations!

Now the pressure’s off, right?

Wrong.

The pressure has simply shifted from the whirlwind of implementation to the expectation of tangible results. Whether the goal was to boost productivity, increase efficiency, or reduce costs, your AI investment is on the clock.

And that clock is ticking. While it typically takes a year or longer to see the intended impact of an AI investment, half of CFOs will cut funding for an AI project within that first 12 months if they’re not seeing positive ROI. In other words, if things aren’t working early, you may not get a chance to fix them before the plug gets pulled. That’s pressure.

Wouldn’t it be great if AI projects came with some kind of early warning system? In this piece, we’ll explain how having the right data can give you timely insight into what’s working, what’s not, and, most importantly, what you can do to course-correct a flagging AI project before it’s too late.      

Is Your AI Pulling Its Weight?

You invested in AI to solve some problem in your organization. Maybe you had a software development team that was getting bogged down in rote tasks. Or a call center that was too overloaded with simple requests to quickly respond to customers who had more complex issues.

So you deployed AI to relieve some of the stress on your employees and thereby increase their productivity and efficiency. For the development team, an AI tool is now checking software specs to give your developers more time to create new code. And in the call center, an AI chatbot is handling some of the more routine calls, freeing up employees to deal with callers who have more complex issues.

Both of these sound like great AI use cases. But are they actually working for your employees and your organization? Are your developers writing more code thanks to the extra time the AI tool is affording them? Are your call center agents helping more customers or really cutting into those hold times because of AI? Are you seeing the ROI you expected?

AI Success Is Dependent on Your Employees

The answer to all of these questions lies in something called work friction – those moments of difficulty or struggle that employees are dealing with in their day-to-day work. 

If the developers in the above scenario find, for example, that they can’t rely on the specs the AI tool is giving them, they won’t use it. The AI in this case not only hasn’t made their work easier, it’s forced them to go back and double-check information for accuracy – it’s actually created more friction. 

Likewise, if customers are having issues with the call center AI chatbot, they’re likely going to call back and wait to speak to a human being. Now those employees will not only be fielding the calls that AI was supposed to cover, but they’ll be dealing with callers who are upset from that bad previous experience. Here too, they’re dealing with even more friction.

In both cases, if the employees have a choice, they’ll likely disable the AI agent or cut it out of their workflow.

Measure AI Effectiveness Using Worker Data  

AI is user-dependent technology. If employees don’t see AI helping in specific areas where they’re encountering work friction, they won’t adopt the tool and you won’t see the results you’re hoping for. This is actually how many AI failures unfold – it’s not that the technology wasn’t good; it just wasn’t a good fit for employees in the specific use case it was supposed to help with.

That’s why work friction is a key leading indicator of AI effectiveness. By getting to the heart of where your employees are experiencing friction, you can gauge whether the tool you’ve deployed is helping to ease their burden. Is your AI solution addressing the specific touchpoints or processes that are slowing down your employees? Is it making their work easier? Is it freeing them from routine tasks to focus on higher-value work? 

If the answer to any of these questions is yes, you’ll likely see the productivity gains you expected from AI. If not, work friction analysis can provide valuable data you can use to adjust your deployment and try again.

For example, we recently worked with a financial services firm that rolled out AI for its development team with a goal very much like the situation described above. But the company didn’t have a clear idea of how the tool was going to impact the developers’ biggest problem areas. Perhaps unsurprisingly, those employees found it didn’t help much, so they stopped using it. The company had a failed AI investment on its hands.

With a detailed work friction analysis, however, the firm was able to see that developers were running into issues reviewing pull requests, finding answers about the code base, and writing technical documentation. 

Now the company had a roadmap for adjusting the tool based on user feedback and redeploying it. When the firm implemented GitHub Copilot to help with documentation and code review, the development team embraced it, which reduced their work friction.

And the company tallied $5.4 million in annual savings.

The Pressure Is on For AI to Deliver – Make Sure You Meet the Moment

The AI revolution forges ahead. A recent Wharton study found that while only 37 percent of large firms used AI weekly in 2023, 72 percent did in 2024. And the momentum doesn’t seem likely to slow in 2025.

Yet even as the pressure to deploy AI continues apace, the greater onus on leaders now will be to show the results of those investments. That can take time, of course, but one way to get an early idea of just how well (or how poorly) an AI experiment is going is to see how your employees are reacting to it.

Work friction data can be that leading indicator. Get in touch to find out how we can help provide the insight on whether your AI investment is headed for success – and, if it’s not, what you can do to fix it before it’s too late.

Monthly Brief

March Newsletter: Don’t Use Old Methods to Measure A New Way of Working

AI isn’t just about deploying new technology – it’s about fundamentally changing how work gets done. But that change doesn’t happen all at once. It unfolds in daily tasks, team interactions, and the moments that make or break productivity.

That’s where work friction comes in.

Just as an AI transformation signals a new way of working, work friction represents a new way of measuring work. And in doing so, it serves as a way for leaders to get out ahead of any employee-related issues or problems that might derail their AI project.

Measuring work friction provides insight into whether an AI tool has had a positive impact on how employees are working, allowing you to validate an AI investment very early in the rollout. By looking specifically at employees’ task-by-task experiences and isolating the moments in their work days that slow them down or cause them trouble, you can get a clear picture of how AI is impacting those moments.

The beauty of assessing work friction is its precision. You no longer need to guess if your technology is effective. Instead, you have data-driven proof points to guide decisions and adjustments and to validate early-stage investments.

FOUNT Platform: Our process for calculating potential ROI for fixing problem areas

FEATURED USE CASE

Reduce Product Waste by Reconfiguring an AI Tool

One large retail company was exceeding its product waste targets by upwards of $10 million, but it had already made all the obvious changes to reduce waste. To reach its targets, it would have to find and eliminate hidden causes. A work friction analysis of input from sales managers revealed that the AI-driven ordering system frequently caused overstocking, resulting in expired products and strained customer relationships.

By refining the AI algorithm – adding considerations for seasonality and promotions and allowing manual adjustments – the company expects to reduce waste by 20 percent by Q4.
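For a sense of what refining an ordering algorithm along those lines might look like, here is a heavily simplified, hypothetical sketch: a baseline order quantity scaled by seasonality and promotion factors, with a manual override for the sales manager. It is not the retailer’s actual model.

```python
# Hypothetical order recommendation with seasonality, promotions, and a manual override.
def recommended_order(baseline_units: float,
                      seasonality_factor: float = 1.0,
                      promotion_factor: float = 1.0,
                      manual_override: float | None = None) -> float:
    """Return an order quantity, letting a human override the model's suggestion."""
    if manual_override is not None:
        return manual_override
    return baseline_units * seasonality_factor * promotion_factor

# Slow season, no promotion; the sales manager trims further based on local knowledge.
print(recommended_order(500, seasonality_factor=0.8, promotion_factor=1.0))   # 400.0
print(recommended_order(500, seasonality_factor=0.8, manual_override=350))    # 350.0
```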

More examples of FOUNT use cases here: How Customers Use FOUNT: Accelerate AI Adoption, Reduce Waste, and Measure ROI Sooner

THE MOST RECENT BLOG POSTS

How AI Tools Change Your Team’s Work (And What to Do About It)

  • When you automate work with AI, you change the nature of the remaining work.
  • To assess how an AI tool has impacted employees’ work, look for work friction – i.e., high-impact work tasks with low satisfaction scores.
  • Address areas of high work friction to ensure positive ROI on your AI investment.

📖 Read it here

AI Transformation Playbook: The Definitive Guide to Measuring, Rescuing, Prioritizing, and Scaling AI Transformations

In this playbook, we’ll lay out everything you need to know to measure the effectiveness of AI implementations so leaders can rescue failing tools, prioritize future projects, and scale successful investments.

📖 Read it here

Worker Impact Is the Common Denominator of Every AI Transformation – And the Best Early Indicator of Success

As AI investments surge, organizations are under pressure to prove ROI faster. Yet, traditional methods fall short in showing early results. This blog explains why the best early indicator of AI success is its impact on employee work – specifically, by measuring work friction.

By classifying AI tools based on the type of work they affect – highly defined, open-ended, or enterprise services – leaders can tailor their measurement strategy. Work friction data, gathered through targeted surveys, reveals where AI is helping or hindering daily tasks, offering fast, actionable insight long before financial metrics show results. Bottom line: only employees can tell you if AI is working.

📖 Read it here

ARTICLES WE RECOMMEND

📖 Employees Give Feedback, But Leaders Too Stressed At Work To Act On It.

By Dr. Diane Hamilton, a business behavioral expert, for Forbes

Dr. Hamilton highlights a troubling cycle: employees regularly share feedback but disengage when their efforts go unacknowledged due to overwhelmed leaders. Breaking this cycle requires intentional communication, targeted action, and shared accountability to transform feedback into tangible improvements.

📖 Superagency in the workplace: Empowering people to unlock AI’s full potential

By Hannah Mayer, Lareina Yee, Michael Chui, and Roger Roberts for McKinsey & Company

McKinsey’s 2025 report, “Superagency in the Workplace,” underscores a significant disconnect between employee readiness for AI adoption and leadership perception. While 92% of companies plan to increase AI investments over the next three years, only 1% consider their AI deployments mature.

FOUNT’s data model translates worker experience into actionable data

AI isn’t simply about tech – it’s about people and the work they do every day. That’s why measuring work friction matters. It gives leaders early, actionable insight into whether AI tools are actually improving the experience of work – not just in theory, but in the tasks and interactions that make up the workday.

The payoff? Faster course correction, better adoption, and a clearer line of sight into whether your AI investments are delivering value for both the business and your people.

In a transformation where every step counts, understanding work friction isn’t just a nice-to-have – it’s how you stay on course.

Until next time,

The FOUNT Global Team


