Security experts are sounding the alarm: “Apps are spying on us.” But perhaps you just installed an app from Google Play or the App Store for some service or marketplace — because it promised bonuses or cashback — and nothing bad seemed to happen to you. You might conclude these experts are just chasing hype and fear.
But in reality, by installing an app for an online shop, a taxi service, a ticket booking platform, a delivery service, a loyalty program, a marketplace, a rental service, and so on, you did more than just give away your data. You may begin to pay more with every subsequent order. App-based surveillance lets companies know what is happening in your life and adjust prices toward the maximum you are likely to pay.
Companies want people to install their apps, and they push them onto your smartphone aggressively. You can no longer simply pick up a retail store’s loyalty card — you need to install the store’s app and sign in. In another case, a coffee shop made it possible to order coffee only through its mobile app.
Unless you genuinely need them, you should not download apps that commercial entities promote heavily. Companies use various incentives: offering to place an order through the app instead of a self-service terminal, or promising free bonuses or discounts if the app is installed at the moment of payment.
In practice, however, the user often gives up more than they receive, even if it is not felt immediately. This primarily concerns personal data and the digital footprint, which can be monetized far more effectively than simple advertising display.
The first reason for caution is connected to the era of so-called surveillance capitalism. Companies aim to collect as much information as possible about consumer behavior, financial condition, daily routine, habits, and vulnerabilities, and then use this data to apply personalized influence.
Most people assume the worst they will get is clumsy advertising — after buying an iron, their feed fills up with ads for irons. The reality can be worse. A few years ago, one coffee chain settled a class action lawsuit over its mobile app’s heavy tracking: the app recorded users’ movements every few minutes, even when it was not actively in use. Affected users were promised a free doughnut and a coffee as the settlement.
But a much more serious trend is personalized pricing. A company whose app has permission to read SMS messages — granted because people prefer the app to read authorization codes automatically rather than type them in — can find out that a person has just received their salary, because the bank sends an SMS notification about the deposit. At that very moment, prices inside the company’s app may quietly rise for that particular user.
As a result, the price is not shaped by the market, but by an algorithm that evaluates the purchasing power of a specific individual at a specific moment.
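To make the mechanism concrete, here is a minimal, entirely hypothetical sketch of how such an algorithm could work. The function name, the signals, and the multipliers are all invented for illustration; no real company’s pricing code is shown here.

```python
# Hypothetical sketch of app-side personalized pricing.
# Every signal and multiplier below is invented for illustration.

def personalized_price(base_price: float, signals: dict) -> float:
    """Adjust a base price using behavioral signals harvested from a device."""
    multiplier = 1.0
    if signals.get("salary_deposit_detected"):   # e.g. parsed from a bank SMS
        multiplier += 0.10                       # the user can likely pay more right now
    if signals.get("frequent_buyer"):
        multiplier += 0.05                       # loyal users rarely comparison-shop
    return round(base_price * multiplier, 2)

print(personalized_price(10.00, {}))                                 # 10.0
print(personalized_price(10.00, {"salary_deposit_detected": True}))  # 11.0
```

The point of the sketch is that the price no longer depends on the product, only on what the algorithm estimates about the buyer at that moment.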
Even more alarming is the fact that such pricing gradually transfers to corporations the function of determining the value of money. The value of money is the amount of goods you can buy for the same sum today and tomorrow. For example, if today a smartphone costs $500, and tomorrow it costs $600, then money has become less valuable. Its purchasing power has decreased.
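The smartphone example above is simple arithmetic: purchasing power is the quantity of a good a fixed sum can buy, and the same $500 buys noticeably less after the price rise.

```python
# Purchasing power: how much of a good the same $500 buys before and after.
budget = 500.0
price_today, price_tomorrow = 500.0, 600.0

units_today = budget / price_today        # 1.0 smartphone
units_tomorrow = budget / price_tomorrow  # ~0.83 of a smartphone

# The same money buys about 16.7% less, so it has lost value.
purchasing_power_drop = 1 - units_tomorrow / units_today
print(f"{purchasing_power_drop:.1%}")  # 16.7%
```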
Without personalized pricing models, everyone pays the same price for the same product — an ordinary hamburger, say. People with higher incomes can buy more of it, but the price of the product itself does not change with the purchasing power of a particular person. A poor person and a rich person pay the same price for a hamburger, provided companies cannot spy on them through an app.
When companies gain the ability to set an individual price for each person, they effectively determine how much one hryvnia or one dollar in your wallet is worth to you. When a company knows that you can pay more, it will increase the price for you individually. This concentrates a disproportionate amount of power in the hands of businesses, which already wield enormous influence.
The second reason is less obvious, but no less dangerous: mandatory arbitration. Many services include in their terms of use a clause stating that in the event of a dispute, the parties will not go to court but will submit to a private dispute resolution procedure before an arbitrator.
Unlike judges, who are paid through public taxation and are part of the public justice system, the arbitrators in these cases are hired by the company itself. This creates a serious conflict of interest and effectively deprives the consumer of full legal protection.
During a normal purchase in a physical location, a consumer cannot be forced to immediately agree to legal terms containing mandatory arbitration. In contrast, installing an app almost always requires acceptance of Terms of Use, where such a clause may be hidden in long, fine-print text.
Moreover, the scope of app terms sometimes extends even to situations that are not directly related to the use of the app itself. There have been widely discussed cases where companies attempted to apply such terms in situations involving serious bodily harm or death, citing the fact that a person had previously agreed to the terms of use of the corporation’s application.
For example, in 2024, a married couple using a ride-hailing service was involved in a serious car accident. The court nevertheless dismissed their lawsuit against the company, because the couple had previously agreed to arbitration in a separate food-delivery application owned by the same corporation.
Only public outcry forces companies to back down in such cases — and even then, it does not always happen.
In some countries, including the United States, mandatory arbitration is supported by legal precedent, and there are almost no real mechanisms for government intervention. Therefore, the main responsibility for protecting one’s rights falls on the users themselves. The simplest and most accessible tool in this situation is to avoid unnecessary applications and not agree to questionable terms.
Next, it is worth looking at the broader context. Personalized pricing in economic theory is often called “price discrimination” and is presented as an allegedly efficient model. The idea is that different customers pay different prices depending on their likely willingness to pay.
If one person gets a product for two dollars and another finds the same product for one dollar, this effectively means that the value of money is different in these two cases. Economists explain this as an attempt to avoid idle resources, such as in the case of a hotel, where a room may be sold cheaply at the last minute if it would otherwise remain empty.
However, economists tend to live in theoretical clouds — in practice, their models can work in the opposite way.
As a result, the logic of setting prices according to purchasing power for the sake of better resource allocation collapses when one considers the volume of data that corporations possess today.
In practice, a company has no incentive to set a lower price. If, for example, a hotel owner can learn about a person’s financial status, health issues, urgency of travel, family circumstances, or even the presence of an emergency in the region, they can reasonably estimate how vulnerable that person is and how much more they are willing to pay. In such a model, the price is no longer the result of market forces, but the result of digital analysis of someone else’s vulnerability.
Here is a real-world example: during a major concert by a top pop singer in Vancouver, Canada, lodging prices increased by up to ten times. On concert days, hotels offered rooms for $2,000 per night that cost around $240 on regular days.
Another case: the Los Angeles wildfires in 2025. Rents skyrocketed by up to 300%, even though local authorities reminded landlords that during a declared emergency, prices for goods and services — including hotel rooms and rental housing — cannot be raised by more than 10%.
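That 10% emergency cap can be expressed as a one-line check. The cap value mirrors the rule cited above; the function itself is only illustrative.

```python
def violates_emergency_cap(pre_emergency_price: float,
                           current_price: float,
                           cap: float = 0.10) -> bool:
    """True if a price exceeds the pre-emergency price by more than the cap."""
    return current_price > pre_emergency_price * (1 + cap)

print(violates_emergency_cap(1000, 4000))  # a 300% hike clearly exceeds 10%
print(violates_emergency_cap(1000, 1100))  # exactly +10% is still within the cap
```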
Alongside emergencies and major events, price gouging happens daily through so-called “price optimization centers”. These are companies that receive sales data from businesses and return recommendations on raising prices. Formally, this is presented as analytics, but in practice it leads to synchronized price growth across entire sectors. These mechanisms are already at work in the housing rental market and in many other industries.
Some jurisdictions, including certain U.S. states, have banned algorithmic rent inflation, but a law is just paper with text on it. Businesses can spend their resources trying to overturn such laws, for example by arguing that algorithmic recommendations for rent increases are a form of lawful speech.
The same tools are also used against workers to depress their wages, sometimes below the minimum wage. Researchers from the University of Oxford and Worker Info Exchange have shown how a major ride-hailing service pushed its gig workers’ pay down by almost 15% after introducing algorithmically determined pay.
In gig-based employment, algorithms can assess a person’s financial difficulties through credit history and offer lower pay to those who are in a more vulnerable situation. This leads to the most vulnerable receiving the least. This approach is known as algorithmic wage discrimination.
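As with pricing, the wage-side logic can be sketched in a few lines. Everything here — the profile fields, the discounts — is hypothetical, intended only to show the shape of the algorithmic wage discrimination described above.

```python
# Hypothetical illustration of algorithmic wage discrimination:
# shaving the offer for workers estimated to accept less.

def offered_pay(base_pay: float, worker_profile: dict) -> float:
    """Illustrative only: lower offers go to workers judged more vulnerable."""
    offer = base_pay
    if worker_profile.get("financial_distress"):   # e.g. inferred from credit data
        offer *= 0.90                              # likely to take any job
    if worker_profile.get("acceptance_rate", 0.0) > 0.95:
        offer *= 0.95                              # almost never declines offers
    return round(offer, 2)

# The most vulnerable worker receives the lowest offer for identical work.
print(offered_pay(20.0, {}))                                        # 20.0
print(offered_pay(20.0, {"financial_distress": True,
                         "acceptance_rate": 0.98}))                 # 17.1
```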
The combination of a company’s internal data with information purchased from data brokers further reinforces this trend. As a result, a less visible but equally dangerous problem is formed: millions of people systematically receive lower compensation based on the digital profile of their lives.
Attempts to restrict data brokers and personalized pricing are made periodically in various countries, but politicians receive money from companies — this is politely called lobbying. As a result, initiatives that would improve the situation for ordinary people are usually canceled or frozen.
Against this background, European and Canadian regulatory practices are becoming increasingly significant, where public authorities at least attempt to investigate the scale of the problem and involve the public in discussion. In some countries, public consultations on algorithmic pricing and its impact on competition and consumer rights have already begun.
The overall picture looks like this: while part of the world is effectively abandoning the protection of citizens in the digital economy, other regions are only beginning to recognize the scale of the threat. Personalized pricing is being transformed into a tool through which corporations determine who is most vulnerable, and it is the poorest and least protected who ultimately pay the most.
In such a situation, refusing unnecessary applications, paying close attention to terms of use, and minimizing one’s digital footprint are no longer paranoia, but basic measures of self-protection in the new reality.