Social media networks have become a constitutive element of modern society, without which present-day democracies could neither function nor be properly understood. Take Facebook, for instance. It has become the most common medium for communication and for receiving the news, whether about a new restaurant in town or recent events in Syria. It is therefore not surprising that Facebook has also become a key hub for political advertising. In the recent UK elections, for example, the opposition Labour Party spent some one million pounds sterling on Facebook advertisements. Similarly, in the 2016 US presidential elections the Trump campaign invested heavily in digital advertising.

As a result, what the average voter sees in his or her Facebook news feed is not neutral. But neither were the older, non-digital media. So what is new? The novelty of social media lies in the degree of its efficiency as an instrument of profit and power, and in the damage it could cause to the minimum standards of social trust and individual rationality necessary for a functioning democratic society. Whereas the old media used advertising to influence opinion, social media today has access to the opinion first and then places its advertising over it. On social media, each user is targeted with a different set of advertisements, selected on the basis of his or her personal page.

The problem with this "personalisation" of the Facebook news feed is the growing risk, as US academic Cass Sunstein has noted, of "fragmentation, polarisation and extremism." Personalised pages are designed to reproduce similar opinions and, in turn, to reduce the individual's exposure to, and conversation with, opposing views. When these individuals meet in the real world, there is little room left for discussion, and the political environment becomes fertile soil for populist leaders who dismiss the other side of the debate as so much "fake news."

These leaders can take advantage of Facebook's targeted advertising to spread their own "fake news" in the pursuit of power. This is what Trump strategists did in their campaign to demoralise African-American supporters of the Democratic Party candidate Hillary Clinton during the 2016 US elections. "Hillary thinks African-Americans are super-predators" was one message sent out on Facebook to discourage such voters from voting for her.

In this era of "post-truth" politics, to use a now ubiquitous term, truth is not the only victim. The challenges this politics poses threaten the whole fabric of society, which needs a minimum degree of trust and cooperation to deliver a functioning democracy.

Even if one overlooks the issue of social trust, there is a second, perhaps graver, problem that arises when social media advertising undermines individual rationality. Online social networks such as Facebook, but also platforms such as Google, Apple and Uber, collect data to improve services built on self-learning algorithms. The customer is the provider of this data, and in return he or she receives "free" services. But there is always a cost: the algorithms are designed to build customer profiles from behaviour and history, and these profiles reveal subjective biases that can then be used to manipulate the customer into paying the maximum price he or she is willing to pay for goods or services, rather than simply the market price.
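The underlying logic of such personalised pricing can be sketched in a few lines of code. The example below is a hypothetical illustration only: the profile fields and numbers are invented, and no real platform's system is implied. It simply assumes that a platform estimates each customer's willingness to pay from behavioural signals and quotes the higher of that estimate and the market price.

```python
# Hypothetical sketch of personalised pricing based on an inferred
# "willingness to pay" (WTP). The profile fields and multipliers are
# invented for illustration; no real platform's code is implied.

MARKET_PRICE = 100.0  # the price every customer would see in a neutral market

def estimate_willingness_to_pay(profile: dict) -> float:
    """Crudely infer a customer's WTP from behavioural signals."""
    wtp = MARKET_PRICE
    if profile.get("device") == "premium_phone":       # proxy for higher income
        wtp *= 1.15
    if profile.get("past_luxury_bookings", 0) > 2:     # history of expensive choices
        wtp *= 1.20
    if profile.get("price_comparison_visits", 0) > 3:  # comparison shoppers get no markup
        wtp = MARKET_PRICE
    return wtp

def quoted_price(profile: dict) -> float:
    """Quote the higher of the market price and the estimated WTP."""
    return max(MARKET_PRICE, estimate_willingness_to_pay(profile))

# Two customers looking at the same product see different prices.
frequent_luxury_buyer = {"device": "premium_phone", "past_luxury_bookings": 5}
careful_comparer = {"device": "premium_phone", "price_comparison_visits": 10}

print(quoted_price(frequent_luxury_buyer))  # 138.0
print(quoted_price(careful_comparer))       # 100.0
```

Even in this toy form, two customers looking at the same product are quoted different prices on the basis of nothing more than their recorded behaviour.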
Such price discrimination, designed to maximise profit, has already been found on travel websites such as Orbitz and Expedia, which have directed some users to more expensive hotels on the basis of their background information. The European Commission also recently fined Google for giving an "illegal advantage" to its own shopping service. There is no doubt that such abuse can, and will, pass to the political realm, from selling products to selling candidates and from profit to power. A study by the US academics Robert Epstein and Ronald Robertson has concluded that "Google's search algorithm can easily shift the voting preferences of undecided voters by 20 per cent or more – up to 80 per cent in some demographic groups – with virtually no one knowing they are being manipulated." Such shifts are sufficient to change the outcomes of most US elections, half of which have been won by margins of under eight per cent.

In the digital world, the individual is thus shadowed by technology. This technology does not only recommend travel, music and books; it also learns users' habits and biases and translates them into targeted pricing and political advertising that proactively steer economic and political decisions. How do we know these decisions are in our best interests? There is no doubt that a technology able to manipulate individual cognitive biases can be ruthlessly employed in the service of profit and power, and that it can also transform the once presumed rational individual, the foundation of modern democratic society, into the irrational tool of an algorithm. The question of rational self-interest then becomes irrelevant, for there is no longer any genuine "self" left to be interested.

We live in a digital age in which not only the line between freedom and censorship can be blurred, but also the lines between truth and falsehood and between rational and irrational choices. The dark side of this age is the threat it poses to social trust and individual rationality, which risks undermining, in the interests of power and profit, the foundations on which modern democratic society is built. The more people are aware of this dark side, the less likely it is that its worst consequences will come to pass. It would therefore be better, sooner rather than later, to appreciate the true cost of "free" services on the Internet and to browse widely before making economic choices or settling on political views.

The writer holds a PhD in international relations and teaches at the University of Leicester, UK.