The Fear: Will AI Take Our Jobs?
Many people are worried about whether artificial intelligence (AI) will take their jobs. This question comes up everywhere: at dinner, in meetings, and in online chats.
For the past two years, Silicon Valley tech companies have often answered "yes," and quickly. They seemed ready to let AI do almost everything.
But we need to look closer. Not everything that looks good is truly good. Sometimes, the real truth is different.
Salesforce's Big Lesson About AI
Salesforce, a very important software company, once strongly believed that AI would replace human workers. But then, it changed its mind.
The story from Salesforce is not just about machines winning over people. It teaches us something much deeper and more important for workers, leaders, and lawmakers.
There has always been a debate: Will AI take over, or will human intelligence always have the edge? Recent events suggest that the faith placed in AI may be overblown, and that it is already starting to fade.
The Rise and Fall of AI Trust
A year ago, companies and employees placed enormous trust in Large Language Models (LLMs), a type of AI. It felt like settled truth: writing an email used to take effort, but now AI could draft a polished one from a simple request.
AI can also summarize meetings, write computer code, and create presentations very fast. But even though AI looks amazing on the surface, a closer look shows problems.
Companies that fired employees to replace them with AI are now rethinking their choices. Salesforce is a clear example of this.
Sanjna Parulekar, a senior leader at Salesforce, admitted that people inside the company now trust these AI models much less. The idea that AI could be an all-purpose "smart worker" is starting to break down under real-world use. This is important because Salesforce is a huge company that helps thousands of businesses manage their customers. When such a big company changes its mind about AI, it signals a larger trend.
When AI Replaced People (and Failed)
The concerns started with numbers. Salesforce cut its support staff from about 9,000 to 5,000 employees, a loss of roughly 4,000 jobs. CEO Marc Benioff openly said that AI agents were taking over work that humans used to do.
This news spread quickly, making many office workers fear that their jobs, once thought safe, were now at risk from AI. For many, the message was clear: AI didn't need to be perfect to cause big changes; it just needed to be "good enough."
Problems with AI Started to Show
However, as AI agents were used more widely, problems began to appear:
- Too Many Instructions: Muralidhar Krishnaprasad, a Salesforce technology executive, found that large AI models often start forgetting instructions once given more than about eight. That might be acceptable for casual chat, but it's a serious problem for business operations, where accuracy and rules are critical.
- Missing Tasks: For example, Vivint, a home security company with 2.5 million customers, found that AI agents simply failed to send customer satisfaction surveys. The failures came without warning or explanation. Salesforce had to add "deterministic triggers" (simple, rule-based systems that always do exactly what they are told) to make things reliable again.
- AI Drift: Executives also talked about "AI drift," where agents lose focus if users ask unrelated questions. A chatbot meant to help a customer fill out a form might suddenly get sidetracked and forget its main job.
These are not minor bugs. They show a core problem: can we truly trust AI with important duties?
The Return to Simple, Reliable Tech
What Salesforce is doing now is very telling. The company has started to promote "deterministic" automation. These are systems that might be less flashy and less like talking to a human, but they are much more dependable.
Simply put, Salesforce is rediscovering the value of "boring technology": software that behaves the same way, every single time.
This means that the idea of "AI-first" is being put aside, at least for now. Even Marc Benioff, who was once a strong supporter of AI, now says that strong data foundations (good, reliable information) are more important than AI models for Salesforce's plans.
It's ironic: at the same time AI is blamed for losing thousands of jobs, the company using it is pulling back from trusting it too much.
What This Means for Your Job
The Salesforce story becomes more complex and honest here. The job cuts were real. People lost their jobs. But the technology that replaced them is not the perfect, all-knowing machine we often imagine. Instead, it's a fragile system that needs rules, checking, and often human correction.
What disappeared at Salesforce was not work itself, but a certain way of doing work. AI agents took over tasks that were:
- Repeated often.
- High in volume.
Humans were removed from jobs focused on doing a lot of tasks, rather than making judgments. But when judgment, understanding small differences, and being responsible mattered, AI failed.
The hard truth is that companies might not be replacing humans because machines are better. Instead, they might be trying to save money and are okay with some mistakes.
So, Will AI Take Your Job?
The Salesforce story suggests we ask a more precise question: What kind of work do you do?
Tasks that are repetitive, rule-bound, and tolerant of small errors are clearly at risk. But jobs that require context, judgment about what matters most, and accountability still strongly need humans.
For now, AI is not a worker. It's a tool, an amplifier of whatever it is given, whether that's efficiency, mistakes, or a company's values. Used carelessly, it replaces people and breaks systems. Used carefully, it reveals how much we used to take human judgment for granted.
The Key Lesson
Salesforce's partial step back is not an AI failure. It's a reality check.
The future of work will not only depend on how fast machines improve. It will also depend on how honestly companies admit what machines still cannot be trusted to do.
And that, perhaps, is the most comforting lesson of all.