CMR is the leading provider of funding and management support for small to medium-sized businesses and entrepreneurs.
Established in 1984, CMR is the leading venture capital, management support and business services provider for small to medium-sized businesses - linking excellent management skills with the substantial financial resources of a global bank of private investors.
CMR has over 450 senior executives operating globally - in the UK, USA, Europe, Asia and Australasia - providing both funding and specialist help for entrepreneurial businesses.
For Businesses
CMR provides excellent resources:
CMR FundEX Business Exchange - gives all companies & entrepreneurs direct access to CMR's global investor base.
CMR Catalyst Group Programme - transforms profitability through merging.
CMR Company Sales Division - helps owners to exit at the best price.
CMR Corporate Recovery Division - experts in rescue and turnaround.
CMR Technology Licensing Division - commercialising innovation.
CMR Executive Professionals - management support and consultancy.
CMR Executives-on-Demand™ - fully experienced senior executives available quickly and cost-effectively.
We always welcome contact with new business clients - please get in touch and we will do our best to match your needs and exceed your expectations.
For Investors
Preferential access to new opportunities for investment and/or acquisition
CMR pre-vets propositions and provides a personalised service to our investors
Syndication service enabling investors to link together as desired
Executive and management support for investments as needed
CMR's services to our investors are not only fast & efficient but also free
We always appreciate new members - you are welcome to join as an investor or as a CMR Executive.
When you join us as a Senior Executive:
CMR's strength is in the skills and experience of our executive members - all senior, director-level people with years of successfully running and managing companies.
Because the demand for CMR's support and services is ever-increasing, especially as we enter recessionary times, we have a growing need for more high-calibre executives to join us from every industry and discipline.
You will be using your considerable experience to help smaller businesses and entrepreneurs to grow profitably. We offer full training and mentoring support to help maximise potential.
We are always keen to find more high-calibre senior executives across all skills and locations. Make contact with us today and maximise your opportunities.
HEAD OFFICE
124 City Road
London EC1 2NX
Tel: +44 (0)207-636-1744
Fax: +44 (0)207-636-5639
Email: cmr@cmruk.com
Registered Office: 124 City Road, London EC1 2NX
Also Glasgow, Dublin, Switzerland, Europe, USA/Canada
Privacy Statement: CMR only retains personal details supplied directly by executives joining CMR themselves, either as Full Executive Members, Interim Management Members or Investors. Those details are only used within CMR and not disclosed to any third parties without that person’s agreement. We will keep that data until the person requests its removal – at that point it will be deleted. Personal data is never sold or used for purposes outside of CMR’s normal operations. Any correspondence should be directed to the Managing Director, CMR, Kemp House, 152-160 City Road, London EC1V 2N
Senior Executives
CMR is a worldwide network of senior executives. Join us to expand your career and business horizons.
Business Entrepreneurs
CMR has a complete range of resources & services provided by experts to help all businesses to grow and prosper.
Investors & Venturers
CMR has a continuous stream of business and funding propositions, which are matched to investor preferences. Join us - it's FREE!
FundEX
FundEX is CMR's worldwide stock market for small to medium-sized companies and entrepreneurs to raise new capital.
Interim & Permanent Management
Many of CMR's executives can be recruited on an interim, permanent or NED basis.
Tue, 25 Nov 2025 09:15:00 +0000 Ethiopian Volcano Erupts For First Time In 10,000 Years
A dormant volcano in Ethiopia's Afar region, Hayli Gubbi, erupted on Sunday for the first time in thousands of years, blasting ash and smoke up to 9 miles into the atmosphere, disrupting air travel across the Red Sea toward Yemen and Oman.
Hayli Gubbi's sudden awakening after roughly 10,000 to 12,000 years may suggest the Afar region and the broader tectonic system beneath East Africa are becoming more geologically active, with new magma moving beneath the crust.
On X, the American Geographical Society posted satellite imagery showing a massive ash and sulfur dioxide plume drifting across the Red Sea toward Yemen.
Flight-tracking website Flightradar24 showed that commercial aircraft in the region are avoiding the toxic plume.
Questions remain over whether nearby dormant volcanoes could also awaken and enter an active cycle.
Tyler Durden
Tue, 11/25/2025 - 04:15
Tue, 25 Nov 2025 08:30:00 +0000 The AfD's Co-Leader Declared That Poland Could Become A Threat To Germany
Authored by Andrew Korybko via Substack,
AfD co-leader Tino Chrupalla said during a recent appearance on public media that “Poland could also become a threat to us…We see that Poland’s interests differ from Germany’s…We are seeing double standards on the Nord Stream issue. Poland did not extradite a wanted criminal, a terrorist, to Germany.”
He’s not wrong, but he’s also not right for the reasons that people might think, namely the assumption that Poland might one day pose a military threat to Germany.
The present piece will clarify the matter.
It’s true that “Poland’s interests differ from Germany’s”, though not necessarily in the economic sense since Poland became a larger export market for Germany earlier this year than China, and Poland has benefited from the German-led EU’s subsidies (though those benefit Germany even more). Their different interests largely pertain to the future of the EU, which Germany envisages becoming a federation under its leadership while Poland wants it to be a loose union of states that retain more of their sovereignty.
Nord Stream embodied these differences since Germany could have leveraged what would have been its leading energy role in the EU had the second pipeline come online to coerce the Central & Eastern European (CEE) countries into making more concessions on their sovereignty to Berlin-backed Brussels. Poland feared this scenario for self-evident reasons, while the US didn’t want the rise of a de facto German-led “Federation of Europe”, so they plotted together to prevent this from happening.
Poland’s Swinoujscie LNG terminal opened in 2015, and it’s now poised to serve as the entryway for US LNG into CEE as explained here, which will erode German influence there. In parallel, the US supports the Polish-led “Three Seas Initiative” of more robust integration among the CEE states, which is one of the means through which Poland plans to revive its long-lost Great Power status. These aforesaid policies were then given an unprecedented boost after the Nord Stream attack that the US arguably orchestrated.
Had the Ukrainian Conflict ended as a result of spring 2022’s peace talks, then the opportunity for blowing up that pipeline would have closed, hence the importance of Poland aiding the UK in its efforts to convince Zelensky to keep fighting by allowing the unlimited transit of military aid to that end.
In the three years since that attack, the German economy greatly weakened , which Poland and the US expect to accelerate the erosion of German influence in CEE and facilitate its replacement with their influence.
Poland can’t replace Germany’s economic influence there even though it just became a $1 trillion economy , but the lopsided trade deal that the EU agreed to with the US could eventually see the latter doing so instead.
Polish influence can instead take the form of leading CEE’s containment of Russia now that it commands NATO’s third-largest army, thus creating a wedge between Germany and Russia like the US also wants, and rallying the region behind its vision of the EU in opposition to Germany’s.
Chrupalla was therefore correct in claiming that “Poland could also become a threat to [Germany]” since the successful implementation of the abovementioned grand strategy would shatter German hegemony over CEE. What he didn’t mention, and perhaps he hasn’t (yet?) realized it, is that the aforesaid is a joint Polish-US plan that’s been operational for years already. If it wasn’t for US support, Poland could never pose any strategic threat to Germany, so it’s really the US that already poses the greatest one of all to it.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of ZeroHedge.
Tyler Durden
Tue, 11/25/2025 - 03:30
Tue, 25 Nov 2025 07:45:00 +0000 UK Navy Intercepts, Shadows Russian Warships In English Channel
The UK military continues monitoring and shadowing Russian 'research vessels' as well as naval ships off Britain's waters which are widely understood to be Russian Navy intelligence collection ships.
In the latest development, British media reported Sunday that the Royal Navy recently intercepted two Russian naval vessels as they passed through the English Channel , according to a statement by the UK Defense Ministry.
Illustrative: Russian MoD image
The ships have been identified as the corvette Stoikiy and the tanker Yelnya, the former of which is part of Russia’s Black Sea Fleet. The Royal Navy's patrol vessel HMS Severn was dispatched to shadow the Russian vessels as they moved west through the Dover Strait - the narrowest part of the channel, adjoining the North Sea.
BBC and CNN have indicated that for part of the shadow mission, monitoring of the Russian ships was handed off to NATO partner assets.
"In addition to the ships stationed around the UK coast, Britain has deployed three Poseidon surveillance aircraft to Iceland as part of a NATO mission patrolling for Russian ships and submarines in the North Atlantic and Arctic , the ministry said," CNN writes.
Britain's military has indicated that its HMS Severn continued tracking the corvette Stoikiy "from a distance" and at all times remained prepared to react to "any unexpected activity" .
Russia's embassy in London has meanwhile emphasized that Russian vessels in international waters in reality "pose no threat to Britain’s security." This comes after alleged incidents where Russian vessels aimed lasers at British warplanes operating above to monitor the ships' activities.
And last week the embassy firmly stated, "London, with its Russophobic path and increasing militaristic hysteria, leads to further degradation of European security, providing the premise for new dangerous situations."
Western allies believe Russian naval 'research vessels' like the Yantar are assisting with Moscow-backed 'sabotage campaigns' in northern European waters. For example, the last couple of years have seen allegations of Russian vessels cutting telecoms cables under the North Sea and elsewhere.
"We call on the British side to hold off taking any destructive steps which might aggravate the crisis situation on the European continent," the embassy added.
Russian officials and institutions have been under intense scrutiny in the UK ever since the Ukraine war began, and relations are steadily worsening. London officials believe that Russian government-linked entities in Britain could be involved in 'sabotage operations' targeting NATO and Europe's defense sector.
Tyler Durden
Tue, 11/25/2025 - 02:45
Tue, 25 Nov 2025 07:00:00 +0000 "Absolutely Breathtaking" - Exposing The Censorship Industrial Complex's Power Grip In Germany
Authored by Greg Collard via Racket.news,
New liber-net report maps an expansive network of government and private censors across Germany...
Many organizations and federal agencies involved in censoring Americans under the guise of mis/disinformation have shut down in the last couple of years. Racket’s Twitter Files exposed the level of censorship slime oozing from organizations such as the Stanford Internet Observatory, the Election Integrity Project, and the Virality Project. On the government side of things, there was the Global Engagement Center, the Foreign Influence Task Force, and the Cybersecurity and Infrastructure Security Agency (CISA), which still exists but is no longer involved in mis/disinfo work.
That’s not to say America is perfect when it comes to free speech, but as Sen. Rand Paul said in September , “throughout government, the censorship apparatus that Biden had put in place is gone.”
However, if you look to Germany, the strongest economic power in the European Union, it’s easy to see where America was going. It has about 330 organizations working with federal and state levels of government to suppress speech and about 425 grants — mostly from the government — that fund this work, according to research from liber-net , a free speech group that tracks censorship.
The most high-profile cases of German censorship, at least in America, have been raids of people who authorities determined had engaged in “digital violence” for offenses that include insulting someone. These raids were the subject of a high-profile “60 Minutes” segment last February. Prosecutors and police largely depend on a system of government-certified and government-funded “flaggers.”
While these incidents understandably get the most attention, the censorship apparatus is much more deeply ingrained in German society, says Andrew Lowenthal, the CEO of liber-net.
“Germany is the most important country doing this type of content controls work in the entirety of the EU and I would argue has a significant influence on the EU. There’s not really any light between civil society and the government.”
As a result, there’s a constant “atmosphere of intimidation,” says Thomas Geisel, a former mayor of Dusseldorf and now a member of the European Parliament.
“People are afraid to speak their mind. That people always have to find some sort of way of expressing their mind in a politically correct way has created a narrower space for discourse, and I think that is really threatening our democracy.”
Liber-net’s report includes a searchable database of organizations involved in content control and the grants that fund their work, rated from one to five flags, with five indicating the most extensive censorship advocacy.
The report indicates that government funding for content controls peaked in 2023 at about $36 million (converted from euros, as all dollar amounts in this article are) among the German federal and state governments as well as the EU. While the combined funding among the three has decreased to around $23 million, the amount from the German federal government remains roughly the same and has increased since last year.
Source: liber-net. Amounts in euros.
The “subtle instruments”
In some cases, government money goes to private organizations that act as a middleman for the government. For example, all of the money the private German Research Foundation distributes is provided by the German federal and state governments, and the EU (1%). The foundation awards money to various mis/disinfo causes. In June, it even requested proposals to expand “the term ‘disinformation’ to include claims that may be factually true,” according to liber-net.
The report says the German government has certified four organizations as flaggers, or in the government’s framing , organizations with “specialized expertise and experience in identifying and reporting illegal content.” The Federal Network Agency (Bundesnetzagentur), which enforces the EU’s controversial Digital Services Act, awards grants to these flaggers.
Among them is a group called REspect! . It’s been a government-certified “trusted flagger ” since October 2024, which means “deletion requests submitted by REspect! to the platforms must be given priority and processed within a shorter time,” according to a REspect! report . The group received funding directly from a government grant program called “Demokratie leben!,” which translates to Live Democracy!
REspect! has an online portal for people to submit their complaints, which are then forwarded to the proper authorities such as the Bavarian Police. That was the case for one person who had the audacity to call a German politician a “Dummschwätzer” on Facebook — which roughly translates to “blowhard” — as documented by the Bundestag in a list of attacks against politicians and political parties.
Translation via ChatGPT: “Report of an offense via the online portal of the Bavarian Police through the reporting office REspect! Reporting an online insult. The GS (Green Party member of the state parliament) was called a ‘Dummschwätzer’ (‘blabbermouth’ / ‘loud-mouthed idiot’) on Facebook.”
Another “trusted flagger” is HateAid, which received its certification in June after proving its bona fides in other aspects of Germany’s censorship apparatus since its founding in 2018, garnering at least $5.2 million in government funding, according to liber-net.
HateAid purports to be a defender of free speech . From its homepage:
However, HateAid, armed with public funding, will go after people who express the wrong opinion. Take the Russia-Ukraine war, according to liber-net:
HateAid has also notably pursued the censorship of those protesting Berlin’s backing of Kiev; it has classified the hashtag “Kriegstreiber” (or “Warmonger”) as “pro-Kremlin propaganda” whose effect is to “undermine the credibility of politicians” supporting Berlin’s war efforts.
HateAid even warns that the “warmonger” hashtag from “small pro-Kremlin accounts” can shape public debate because they respond to channels with large audiences, such as those of politicians and journalists (bold emphasis is HateAid):
These are retweeted or commented on preferentially in order to spread the narrative of the “warmonger”. In this way, even small accounts can share propaganda with enormous reach . As a result, they enter the centre of society, where they are also perceived and taken up by citizens who are reading along. In effect, they are free riders on the reach of these accounts and can thus shape the public debate.
The CEO of HateAid, Josephine Ballon, was part of the 60 Minutes piece mentioned above. She declared that “free speech needs boundaries.”
Without speech boundaries, Ballon argued that people will be afraid to participate in political discussions.
“This is not only a fear, it’s already taking place. Already half of the Internet users in Germany are afraid to express their political opinion, and they rarely participate in public debates online anymore.”
Therein lies the rub: is that because people are afraid of being criticized or attacked online, or because people are afraid of being turned over to authorities by the government’s “trusted flaggers” such as HateAid and REspect!?
Geisel says Germany’s censorship apparatus is having a similar effect as anti-speech laws in Russia, which, after invading Ukraine, made “discrediting the armed forces” a crime .
“It’s a lot more subtle [in Germany], but the result is very similar in that you simply don’t speak your mind anymore because there are more subtle instruments preventing you from speaking your mind.”
He points to the highly publicized case of political scientist Ulrike Guerot as an example. Guerot was a political science professor at the University of Bonn until she was fired in 2023 after outrage over a book she co-authored, “Endspiel Europa,” which translates to “Endgame Europe.” Guerot argued that “Ukraine had the role of starting a war with Russia on behalf of the West, which was then to be backed militarily and logistically by NATO member states…”
Officially, Guerot was fired for plagiarism, although she maintains there were only minor problems and that the accusations were a pretext for firing her over her views.
Guerot said liber-net’s report is eye-opening because it maps out a censorship network that makes clear to her the problem is worse than she realized.
“It draws the line between the dots and you say, ‘Ah, this is connected to this and they got the money from there.’ And that’s why it's called a censorship network. It’s like a spider net, and there are the dots and it’s all connected. And in this respect, I must admit it was absolutely breathtaking. ”
Tyler Durden
Tue, 11/25/2025 - 02:00
Tue, 25 Nov 2025 04:25:00 +0000 Trump Has Called Europe's Bluff
Authored by Wolfgang Munchau via UnHerd.com,
The 28-point plan the White House negotiated with the Kremlin is not a done deal. It’s not even close. It is a blueprint, no more, no less. In any case, Trump is an unpredictable player — he could back out at any moment. But this time, I don’t think he will.
The plan, first circulated on a Telegram channel , is clearly not a great one for Ukraine. But, equally, it isn’t a “capitulation” and those who have described it as such don’t really want a deal. Ukraine will be able to improve on it. But, admittedly, not by much. “You don’t have the cards ,” Trump once told Zelensky. Unfortunately, after the recent corruption scandal, his hand is weaker than ever.
Over the past three years, US officials have repeatedly told me that Ukraine has no chance of winning the war. And after America withdrew support earlier this year, it was clear that they had a point: Europe was in no position to plug the gap. Europeans might be the self-righteous defenders of the fast-collapsing multilateral world order, but history will record that when push came to shove, they weren’t ready to put their money where their mouth was. On average, the total support for Ukraine was around €4 billion per month during the first half of the year. In July and August, it collapsed to under €1 billion per month, according to the Kiel Institute . No major European country has been willing to cut spending or raise taxes to fund Ukraine meaningfully. The Europeans’ strategy, such as it was beyond photo-ops with Zelensky, was to keep the Russians fighting until they got tired.
Unfortunately, America tired first. And Europe had no Plan B.
Now Europe is out of money and out of ideas. And Trump does have a plan. He has been playing the long game. His tough talk against Vladimir Putin was merely tactical, intended to mask a long-term strategy to force an end to the war. As Phillips O’Brien has suggested, in his “Long Con” analysis, even Trump’s secondary oil sanctions were part of this gambit. These were supposed to take effect on 21 November. And yet nothing happened. India and China can continue to buy Russian oil with impunity. The sanctions were never serious.
Trump has a singular priority — to end the war, whatever it takes. And he has two major advantages in this bid. One is Ukraine and Europe’s military dependence on the US. The other is America’s unique status as the only influential Western power with direct diplomatic channels with Moscow. The Europeans committed a huge strategic blunder when they simultaneously ended their conversations with Vladimir Putin.
And so Trump’s 28-point plan was negotiated by Steve Witkoff with his counterpart in Russia, Kirill Dmitriev. Admittedly, it does have the feel of a work in progress: the leaked version was written in Russian and, when translated into English, is clumsy. It is detailed, but by no means a formally agreed text.
There are, though, some non-negotiable elements. One is the territorial agreement which would give Russia a part of Ukraine it does not yet occupy. Russia already holds almost 90% of the entire Donbas region — all of Luhansk, and roughly three quarters of Donetsk. Trump’s peace plan would hand Russia the remaining territory of Donetsk, along with the 200,000 Ukrainians who are still resident in the Ukrainian-controlled parts of the oblast. Under the plan, the territory would be demilitarised and become part of a buffer zone.
Trump’s team accepted this because they concluded, correctly, in my view, that without it there would be no deal. Putin would have continued to fight and eventually captured more territory. Russia has been making advances: it recently managed to occupy the vital frontline town of Pokrovsk. It could take another year for Russia to capture the remainder of Donetsk, before it went for the big prize: Zaporizhzhia, a city with approximately 700,000 inhabitants, and the capital of the region that bears the same name. At that point, Ukraine’s future independence could no longer be assured.
This peace deal, though, is not as one-sided as its critics say. It formally recognises the sovereignty of Ukraine, and its right to join the EU. It also allows Ukraine to maintain an army, capped at a reasonable 600,000 troops. Nor does it restrict Nato countries from providing further assistance, except for certain weapon categories like long-range missiles.
But there are some real curveballs. I almost fell off my chair when I read Point 14, which suggests investing $100bn of Russia’s frozen assets in the reconstruction of Ukraine — with the US taking half of the profits. This is classic Trump: playing commercial games which are beyond the imagination of our European diplomats. In addition, Europe would be obliged to pay $100bn of assistance from their own pockets. There will be a US-Russian investment fund to finance joint American–Russian projects, with profits shared.
But most importantly, the deal forces Europe to unfreeze the $200 billion in Russian assets currently held in European accounts, mostly in Belgium. This is a bitter pill; Europe had hoped it could use the Russian money as collateral for Ukraine loans. Trump has no authority to force Europe to release the funds — Friedrich Merz already said No to this demand — but he could make life difficult if it refuses.
Europe’s only semi-coherent strategy regarding Ukraine had been to withhold these assets as leverage for future reparations — a plan built on the fiction that Ukraine would win this war.
But if Russia and Ukraine end up agreeing a deal, this scheme would be rendered unworkable, since the withheld assets would be a tool with which the Europeans could sabotage the deal.
Another red-line in the peace deal is the gradual lifting of sanctions. Readmitting Russia to the Group of Seven advanced industrial nations — making it the G8 again — would be painful for the Europeans. Russia was expelled in 2014, after Russia annexed Crimea. A revived G8 would effectively be ruled by Trump and Putin.
No surprise, then, that EU leaders at the G20 summit in South Africa this weekend issued a statement to say that they wanted to make a counterproposal, designed primarily to frustrate Trump’s plan. They insisted on a ceasefire — a non-starter. And after senior US and European officials met in Geneva on Sunday, they said they had made some progress but gave no details.
Ukraine, by contrast, made some positive noises about a new version of the deal. The Kyiv Independent quoted a senior US official saying that the plan had been drawn up with Rustem Umerov, the secretary of the National Security and Defense Council of Ukraine, and one of Zelensky’s closest aides. Umerov reportedly agreed to the majority of the deal, after making several modifications, which he then showed to Zelensky.
Domestic attitudes in Ukraine are also shifting. I noted a post from Iuliia Mendel, Zelensky’s former press secretary and staunch defender of Ukraine. Over the weekend she tweeted: “My country is bleeding out. Many who reflexively oppose every peace proposal believe they are defending Ukraine. With all respect, that is the clearest proof they have no idea what is actually happening on the front lines and inside the country right now.” She is absolutely right in her observation that the loudest supporters of Ukraine in Europe are those with no understanding whatsoever of the military reality on the ground.
So will the Europeans encourage Zelensky to keep on fighting? I am sure they will try. But I am not sure they will succeed. Ultimately, they will back down.
Because if Ukraine were to reject the deal, Trump would formally disconnect his remaining military aid and intelligence support to Ukraine. The country relies on this as its early warning system for any incoming attacks as well as guiding its own strikes on Russian infrastructure.
Trump could go even further and renounce US responsibility for Europe’s security, on the grounds that the continent is taking unacceptable risks. The Europeans know this, of course. While outwardly, they may give an impression of defiance, their actions suggest otherwise. After Trump imposed tariffs on European imports this summer, the EU folded and agreed to a big increase in military spending. If Europe really wanted independence from the US, it would have created a defence procurement union, with a “Buy European” mandate and started to reorganise its militaries. None of this is happening. Nor will it. This is the problem with the multilateral crowd. They care too much about procedures.
We can expect a good deal of huffing and puffing coming from European capitals over the next few days. Leaders will insist that they retain sovereign decision-making. Legally, this is true. The US has no right to decide the fate of Russian assets held in Europe.
But this is not a legal dispute, it is a political one. Europe never had a viable strategy for the war — and now it’s becoming clear it has no strategy for peace either. The Europeans have no choice but to make a deal: they have no cards left to play either.
Tyler Durden
Mon, 11/24/2025 - 23:25
Tue, 25 Nov 2025 04:00:00 +0000 The Google TPU: The Chip Made For The AI Inference Era
By UncoverAlpha
As I find the topic of Google TPUs extremely important, I am publishing a comprehensive deep dive, not just a technical overview, but also strategic and financial coverage of the Google TPU.
Topics covered:
The history of the TPU and why it all even started?
The difference between a TPU and a GPU?
Performance numbers TPU vs GPU?
Where are the problems for the wider adoption of TPUs
Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years
How many TPUs does Google produce today, and how big can that get?
Gemini 3 and the aftermath of Gemini 3 on the whole chip industry
Let’s dive into it.
The history of the TPU and why it all even started?
The story of the Google Tensor Processing Unit (TPU) begins not with a breakthrough in chip manufacturing, but with a realization about math and logistics. Around 2013, Google’s leadership—specifically Jeff Dean, Jonathan Ross (now the CEO of Groq), and the Google Brain team—ran a projection that alarmed them. They calculated that if every Android user utilized Google’s new voice search feature for just three minutes a day, the company would need to double its global data center capacity just to handle the compute load.
At the time, Google was relying on standard CPUs and GPUs for these tasks. While powerful, these general-purpose chips were inefficient for the specific heavy lifting required by Deep Learning: massive matrix multiplications. Scaling up with existing hardware would have been a financial and logistical nightmare.
This sparked a new project. Google decided to do something rare for a software company: build its own custom silicon. The goal was to create an ASIC (Application-Specific Integrated Circuit) designed for one job only: running TensorFlow neural networks.
Key Historical Milestones:
2013-2014: The project moved remarkably fast, as Google hired a very capable team and, admittedly, had some luck in its first steps. The team went from design concept to deploying silicon in data centers in just 15 months—a very short cycle for hardware engineering.
2015: Before the world knew they existed, TPUs were already powering Google’s most popular products. They were silently accelerating Google Maps navigation, Google Photos, and Google Translate.
2016: Google officially unveiled the TPU at Google I/O 2016.
This urgency to solve the “data center doubling” problem is why the TPU exists. It wasn’t built to sell to gamers or render video; it was built to save Google from its own AI success. With that in mind, Google has been thinking about the “costly” AI inference problem for over a decade now. This is also one of the main reasons why the TPU is so good today compared to other ASIC projects.
The difference between a TPU and a GPU?
To understand the difference, it helps to look at what each chip was originally built to do. A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.
GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.
A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.
The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).
In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).
It loads data (weights) once.
It passes inputs through a massive grid of multipliers.
The data is passed directly to the next unit in the array without writing back to memory.
What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.
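The dataflow described above can be sketched as a toy simulation: weights are loaded into a grid of cells once, and each input streams through while partial sums pass from cell to cell instead of being written back to memory. This is an illustrative model of the weight-stationary idea in plain Python, not Google’s actual hardware design:

```python
# Toy weight-stationary systolic array: weights are loaded into the
# grid once; inputs stream through, and each cell forwards its running
# partial sum to the next cell rather than storing it in memory.
# Illustrative sketch only -- not Google's actual TPU design.

def systolic_matmul(x, w):
    """Multiply x (m x k) by w (k x n) the systolic way."""
    m, k, n = len(x), len(w), len(w[0])
    # Step 1: load weights into the k x n grid of cells (done once).
    grid = [[w[i][j] for j in range(n)] for i in range(k)]
    out = [[0] * n for _ in range(m)]
    # Step 2: each input row streams through the grid; the partial sum
    # flows from cell to cell and only the final result is written out.
    for r in range(m):
        for j in range(n):          # one column of cells
            partial = 0
            for i in range(k):      # partial sum passed cell to cell
                partial += x[r][i] * grid[i][j]
            out[r][j] = partial
    return out

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(systolic_matmul(a, b))  # [[19, 22], [43, 50]]
```

The point of the sketch: the only "memory" traffic is loading the weights once and writing the final outputs, which is the essence of why the systolic array cuts HBM reads and writes.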
Google’s newest TPU design, called Ironwood, also addresses some of the key areas where earlier TPUs were lacking:
It enhances the SparseCore for efficiently handling large embeddings (good for recommendation systems and LLMs).
It increases HBM capacity and bandwidth (up to 192 GB per chip). For comparison, Nvidia’s Blackwell B200 has 192 GB per chip, while Blackwell Ultra, also known as the B300, has 288 GB per chip.
It improves the Inter-Chip Interconnect (ICI) for linking thousands of chips into massive clusters, also called TPU Pods (needed for AI training as well as some test-time compute inference workloads). The ICI is very performant, with a peak bandwidth of 1.2 TB/s versus Blackwell NVLink 5 at 1.8 TB/s, and Google’s ICI, together with its specialized compiler and software stack, still delivers superior performance on some specific AI tasks.
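To put those interconnect figures in perspective, here is a rough raw-bandwidth calculation: the floor time to move one chip’s full 192 GB of HBM over each link at the quoted peak rates. Real collective-communication performance depends on topology, message sizes, and the software stack, so this is only a lower bound implied by the link speeds:

```python
# Back-of-envelope: time to move one chip's full 192 GB HBM at the
# peak link bandwidths quoted above. Real all-reduce performance
# depends on topology and software -- this is only the raw floor.

hbm_gb = 192
ici_tbs = 1.2      # Google ICI peak, TB/s
nvlink_tbs = 1.8   # Nvidia NVLink 5 peak, TB/s

ici_s = hbm_gb / (ici_tbs * 1000)        # convert TB/s to GB/s
nvlink_s = hbm_gb / (nvlink_tbs * 1000)
print(f"ICI:    {ici_s * 1000:.0f} ms")     # 160 ms
print(f"NVLink: {nvlink_s * 1000:.0f} ms")  # 107 ms
```

The gap at the link level is real but modest, which is why the compiler and network topology, not raw bandwidth alone, decide which stack wins on a given workload.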
The key thing to understand is that because the TPU doesn’t need to decode complex instructions or constantly access memory, it can deliver significantly higher Operations Per Joule.
For scale-out, Google uses Optical Circuit Switch (OCS) and its 3D torus network, which compete with Nvidia’s InfiniBand and Spectrum-X Ethernet. The main difference is that OCS is extremely cost-effective and power-efficient as it eliminates electrical switches and O-E-O conversions, but because of this, it is not as flexible as the other two. So again, the Google stack is extremely specialized for the task at hand and doesn’t offer the flexibility that GPUs do.
Performance numbers: TPU vs GPU
As we defined the differences, let’s look at real numbers showing how the TPU performs compared to the GPU. Since Google isn’t revealing these numbers, it is really hard to get details on performance. I studied many articles and alternative data sources, including interviews with industry insiders, and here are some of the key takeaways.
The first important thing to note is that there is very limited information on Google’s newest TPUv7 (Ironwood), as Google introduced it in April 2025 and it is only now becoming available to external clients (internally, Google is said to have been using Ironwood since April, possibly even for Gemini 3.0). Why does this matter? Because the comparison below is between TPUv7 and an older but still widely used version, TPUv5p, based on Semianalysis data:
TPUv7 produces 4,614 TFLOPS (BF16) vs 459 TFLOPS for TPUv5p
TPUv7 has 192 GB of memory capacity vs 96 GB for TPUv5p
TPUv7 memory bandwidth is 7,370 GB/s vs 2,765 GB/s for v5p
We can see that the performance leaps between v5 and v7 are very significant. To put that in context, most of the comments that we will look at are more focused on TPUv6 or TPUv5 than v7.
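A quick calculation makes the generational leap explicit, using the Semianalysis figures listed above:

```python
# Generational ratios computed from the Semianalysis figures above.
tpu = {
    "v5p": {"tflops_bf16": 459,  "hbm_gb": 96,  "bw_gbs": 2765},
    "v7":  {"tflops_bf16": 4614, "hbm_gb": 192, "bw_gbs": 7370},
}

for key in ("tflops_bf16", "hbm_gb", "bw_gbs"):
    ratio = tpu["v7"][key] / tpu["v5p"][key]
    print(f"{key}: {ratio:.1f}x")
# tflops_bf16: 10.1x
# hbm_gb: 2.0x
# bw_gbs: 2.7x
```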
Based on analysis of many interviews with former Google employees, customers, and competitors (people from AMD, Nvidia, and others), the summary of the results is as follows.
Most agree that TPUs are more cost-effective than Nvidia GPUs, and most agree that performance per watt is better on TPUs. This view does not hold across all use cases, though.
A Former Google Cloud employee:
"If it is the right application, then they can deliver much better performance per dollar compared to GPUs. They also require much lesser energy and produces less heat compared to GPUs. They’re also more energy efficient and have a smaller environmental footprint, which is what makes them a desired outcome.
The use cases are slightly limited to a GPU, they’re not as generic, but for a specific application, they can offer as much as 1.4X better performance per dollar, which is pretty significant saving for a customer that might be trying to use GPU versus TPUs." - source: AlphaSense
Similarly, a very insightful comment from a former unit head at Google on TPUs materially lowering AI-search cost per query vs GPUs:
"TPU v6 is 60-65% more efficient than GPUs, prior generations 40-45%"
This interview was in November 2024, so the expert is probably comparing the v6 TPU with the Nvidia Hopper. Today, we already have Blackwell vs V7.
Many experts also mention the speed benefit that TPUs offer, with a Former Google Head saying that TPUs are 5x faster than GPUs for training dynamic models (like search-like workloads).
There was also a very eye-opening interview with a client who used both Nvidia GPUs and Google TPUs as he describes the economics in great detail:
"If I were to use eight H100s versus using one v5e pod, I would spend a lot less money on one v5e pod. In terms of price point money, performance per dollar, you will get more bang for TPU. If I already have a code, because of Google’s help or because of our own work, if I know it already is going to work on a TPU, then at that point it is beneficial for me to just stick with the TPU usage.
In the long run, if I am thinking I need to write a new code base, I need to do a lot more work, then it depends on how long I’m going to train. I would say there is still some, for example, of the workload we have already done on TPUs that in the future because as Google will add newer generation of TPU, they make older ones much cheaper.
For example, when they came out with v4, I remember the price of v2 came down so low that it was practically free to use compared to any NVIDIA GPUs.
Google has got a good promise so they keep supporting older TPUs and they’re making it a lot cheaper. If you don’t really need your model trained right away, if you’re willing to say, “I can wait one week,” even though the training is only three days, then you can reduce your cost 1/5." - source: AlphaSense
Another valuable interview was with a current AMD employee, acknowledging the benefits of ASICs:
"I would expect that an AI accelerator could do about probably typically what we see in the industry. I’m using my experience at FPGAs. I could see a 30% reduction in size and maybe a 50% reduction in power vs a GPU."
We also got some numbers from a Former Google employee who worked in the chip segment:
"When I look at the published numbers, they (TPUs) are anywhere from 25%-30% better to close to 2x better, depending on the use cases compared to Nvidia. Essentially, there’s a difference between a very custom design built to do one task perfectly versus a more general purpose design."
What is also known is that the real edge of TPUs lies not in the hardware but in the software and in the way Google has optimized its ecosystem for the TPU.
A lot of people mention the problem that every Nvidia “competitor” like the TPU faces: Nvidia’s fast pace of development and the constant need to catch up. This month, a former Google Cloud employee addressed that concern head-on, as he believes TPUs are improving at a faster rate than Nvidia’s chips:
"The amount of performance per dollar that a TPU can generate from a new generation versus the old generation is a much significant jump than Nvidia"
In addition, the recent data from Google’s presentation at the Hot Chips 2025 event backs that up, as Google stated that the TPUv7 is 100% better in performance per watt than their TPUv6e (Trillium).
Even for hard Nvidia advocates, TPUs are not to be shrugged off easily, as even Jensen thinks very highly of Google’s TPUs. In a podcast with Brad Gerstner, he mentioned that when it comes to ASICs, Google with TPUs is a “special case”. A few months ago, we also got an article from the WSJ saying that after the news publication The Information published a report stating that OpenAI had begun renting Google TPUs for ChatGPT, Jensen called Altman, asked him if it was true, and signaled that he was open to getting the investment talks back on track. Also worth noting: Nvidia’s official X account posted a screenshot of an article in which OpenAI denied plans to use Google’s in-house chips. To say the least, Nvidia is watching TPUs very closely.
Ok, but after looking at some of these numbers, one might think, why aren’t more clients using TPUs?
The barriers to wider TPU adoption
The main problem for TPU adoption is the ecosystem. Nvidia’s CUDA is ingrained in the minds of most AI engineers, who have been learning CUDA since university. Google has developed its ecosystem internally but not externally, as it used TPUs only for its own workloads until now. TPUs use a combination of JAX and TensorFlow, while the industry skews toward CUDA and PyTorch (although TPUs now support PyTorch too). While Google is working hard to make its ecosystem more open and interoperable with other stacks, it is also a matter of libraries and community formation that takes years to develop.
It is also important to note that, until recently, the GenAI industry’s focus has largely been on training workloads. In training workloads, CUDA is very important, but when it comes to inference, even reasoning inference, CUDA matters much less, so the chances of expanding the TPU footprint in inference are much higher than in training (although TPUs do really well in training too, with Gemini 3 as the prime example).
The fact that most clients are multi-cloud also poses a challenge for TPU adoption, as AI workloads are closely tied to data and its location (cloud data transfer is costly). Nvidia is accessible via all three hyperscalers, while TPUs are available only at GCP so far. A client who uses TPUs and Nvidia GPUs explains it well:
"Right now, the one biggest advantage of NVIDIA, and this has been true for past three companies I worked on is because AWS, Google Cloud and Microsoft Azure, these are the three major cloud companies.
Every company, every corporate, every customer we have will have data in one of these three. All these three clouds have NVIDIA GPUs. Sometimes the data is so big and in a different cloud that it is a lot cheaper to run our workload in whatever cloud the customer has data in.
I don’t know if you know about the egress cost that is moving data out of one cloud is one of the bigger cost. In that case, if you have NVIDIA workload, if you have a CUDA workload, we can just go to Microsoft Azure, get a VM that has NVIDIA GPU, same GPU in fact, no code change is required and just run it there.
With TPUs, once you are all relied on TPU and Google says, “You know what? Now you have to pay 10X more,” then we would be screwed, because then we’ll have to go back and rewrite everything. That’s why. That’s the only reason people are afraid of committing too much on TPUs. The same reason is for Amazon’s Trainium and Inferentia." - source: AlphaSense
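The egress point the client makes can be made concrete with a rough, assumed-price example (the $0.09/GB figure is an illustrative internet-egress list price, not a number from the interview; actual inter-cloud and volume-discounted rates vary):

```python
# Why "run the compute where the data lives" wins: moving a large
# dataset between clouds incurs egress fees. The rate below is an
# assumed, illustrative list price -- real inter-cloud and
# discounted rates vary by provider and volume.

dataset_tb = 100
egress_per_gb = 0.09  # assumed USD per GB

cost = dataset_tb * 1000 * egress_per_gb
print(f"One-time egress for {dataset_tb} TB: ${cost:,.0f}")  # $9,000
```

A bill like that, paid before a single training step runs, is why a portable CUDA workload that can follow the data across clouds is so sticky.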
These problems are well known at Google, so it is no surprise that the internal debate over keeping TPUs in-house or selling them externally is a constant topic. Keeping them internal enhances the GCP moat, but many former Google employees believe that at some point Google will start offering TPUs externally as well, perhaps through some neoclouds rather than its two biggest competitors, Microsoft and Amazon. Opening up the ecosystem, providing support, and making TPUs more widely usable are the first steps toward making that possible.
A former Google employee also mentioned that Google last year formed a more sales-oriented team to push and sell TPUs, so it’s not like they have been pushing hard to sell TPUs for years; it is a fairly new dynamic in the organization.
Google’s TPU is the biggest competitive advantage of its cloud business for the next 10 years
The most valuable thing for me about TPUs is their impact on GCP. As we witness the transformation of cloud businesses from the pre-AI era to the AI era, the biggest takeaway is that the industry has gone from an oligopoly of AWS, Azure, and GCP to a more commoditized landscape, with Oracle, Coreweave, and many other neoclouds competing for AI workloads. The problem with AI workloads is the combination of this competition and Nvidia’s 75% gross margin, which results in low margins for the cloud providers. The cloud industry is moving from a 50-70% gross margin industry to a 20-35% gross margin industry. For cloud investors, this should be concerning, as the future profile of some of these companies looks more like a utility than an attractive, high-margin business. But there is a way to avoid that future and return to a normal margin: the ASIC.
The cloud providers who control their hardware and are not beholden to Nvidia and its 75% gross margin will be able to return to the world of 50% gross margins. It is no surprise that all three of AWS, Azure, and GCP are developing their own ASICs. The most mature by far is Google’s TPU, followed by Amazon’s Trainium, and lastly Microsoft’s MAIA (although Microsoft owns the full IP of OpenAI’s custom ASICs, which could help it in the future).
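The margin argument can be sketched with illustrative numbers. Everything below is assumed for the sake of the arithmetic (the silicon cost, the other operating costs, the backend-partner markup); the only figure taken from the text is the 75% vendor gross margin:

```python
# Illustrative sketch of the accelerator-markup argument. All numbers
# are assumed, not Google's or Nvidia's actual economics, except the
# 75% vendor gross margin cited in the text. Suppose a cloud rents
# $100 of AI compute and the accelerator dominates its cost base.

revenue = 100.0
silicon_cost = 15.0      # assumed cost to manufacture the chip
vendor_margin = 0.75     # Nvidia-style gross margin (from the text)
other_costs = 20.0       # assumed power, data center, network, etc.

# Buying from the vendor: a 75% gross margin means paying 4x silicon cost.
nvidia_price = silicon_cost / (1 - vendor_margin)   # $60 to the vendor
margin_buying = (revenue - nvidia_price - other_costs) / revenue

# In-house ASIC: pay roughly silicon cost plus a design partner's cut.
asic_price = silicon_cost * 1.5                     # assumed markup
margin_inhouse = (revenue - asic_price - other_costs) / revenue

print(f"Buying GPUs:   {margin_buying:.0%} gross margin")   # 20%
print(f"In-house ASIC: {margin_inhouse:.0%} gross margin")  # 57%
```

The exact percentages depend entirely on the assumptions, but the structure of the argument, removing a 4x markup on the largest cost line, is what pulls the cloud back toward its pre-AI margin profile.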
While even with ASICs you are not 100% independent, as you still have to work with someone like Broadcom or Marvell, whose margins are lower than Nvidia’s but still not negligible, Google is again in a very good position. Over the years of developing TPUs, Google has managed to bring much of the chip design process in-house. According to a current AMD employee, Broadcom no longer knows everything about the chip: at this point, Google is the front-end designer (the actual RTL of the design) while Broadcom is only the backend physical design partner. On top of that, Google of course owns the entire software optimization stack for the chip, which makes it as performant as it is. Based on this work split, the AMD employee thinks Broadcom is lucky if it gets a 50-point gross margin on its part.
Without having to pay Nvidia for the accelerator, a cloud provider can either price its compute similarly to others and maintain a better margin profile or lower costs and gain market share. Of course, all of this depends on having a very capable ASIC that can compete with Nvidia. Unfortunately, it looks like Google is the only one that has achieved that, as the number one-performing model is Gemini 3 trained on TPUs. According to some former Google employees, internally, Google is also using TPUs for inference across its entire AI stack, including Gemini and models like Veo. Google buys Nvidia GPUs for GCP, as clients want them because they are familiar with them and the ecosystem, but internally, Google is full-on with TPUs.
As the complexity of each generation of ASICs increases, similar to the complexity and pace of Nvidia, I predict that not all ASIC programs will make it. I believe outside of TPUs, the only real hyperscaler shot right now is AWS Trainium, but even that faces much bigger uncertainties than the TPU. With that in mind, Google and its cloud business can come out of this AI era as a major beneficiary and market-share gainer.
Recently, we even got comments from the SemiAnalysis team praising the TPU:
"Google’s silicon supremacy among hyperscalers is unmatched, with their TPU 7th Gen arguably on par with Nvidia Blackwell. TPU powers the Gemini family of models which are improving in capability and sit close to the pareto frontier of $ per intelligence in some tasks" - source: SemiAnalysis
How many TPUs does Google produce today, and how big can that get?
Here are the numbers that I researched...
Continue reading at uncoveralpha.com
Tyler Durden
Mon, 11/24/2025 - 23:00
Tue, 25 Nov 2025 03:35:00 +0000 The UK And Canada Lead The West's Descent Into Digital Authoritarianism
Authored by Sonia Elijah via The Brownstone Institute,
“Big Brother is watching you.”
These chilling words from George Orwell’s dystopian masterpiece, 1984, no longer read as fiction but are becoming a bleak reality in the UK and Canada—where digital dystopian measures are unravelling the fabric of freedom in two of the West’s oldest democracies.
Under the guise of safety and innovation, the UK and Canada are deploying invasive tools that undermine privacy, stifle free expression, and foster a culture of self-censorship. Both nations are exporting their digital control frameworks through the Five Eyes alliance, a covert intelligence-sharing network uniting the UK, Canada, US, Australia, and New Zealand, established during the Cold War.
Simultaneously, their alignment with the United Nations’ Agenda 2030, particularly Sustainable Development Goal (SDG) 16.9—which mandates universal legal identity by 2030—supports a global policy for digital IDs, such as the UK’s proposed Brit Card and Canada’s Digital Identity Program, which funnel personal data into centralized systems under the pretext of “efficiency and inclusion.” By championing expansive digital regulations, such as the UK’s Online Safety Act and Canada’s pending Bill C-8, which prioritize state-defined “safety” over individual liberties, both nations are not just embracing digital authoritarianism—they’re accelerating the West’s descent into it.
The UK’s Digital Dragnet
The United Kingdom has long positioned itself as a global leader in surveillance. The British spy agency, Government Communications Headquarters (GCHQ), runs the formerly secret mass surveillance programme, code-named Tempora, operational since 2011, which intercepts and stores vast amounts of global internet and phone traffic by tapping into transatlantic fibre-optic cables. Knowledge of its existence only came about in 2013, thanks to the bombshell documents leaked by the former National Security Agency (NSA) intelligence contractor and whistleblower, Edward Snowden. “It’s not just a US problem. The UK has a huge dog in this fight,” Snowden told the Guardian in a June 2013 report. “They [GCHQ] are worse than the US.”
Following that is the Investigatory Powers Act (IPA) 2016, also dubbed the “Snooper’s Charter,” which mandates that internet service providers store users’ browsing histories, emails, texts, and phone calls for up to a year. Government agencies, including police and intelligence services (like MI5, MI6, and GCHQ) can access this data without a warrant in many cases, enabling bulk collection of communications metadata. This has been criticized for enabling mass surveillance on a scale that invades everyday privacy.
Recent expansions under the Online Safety Act (OSA) further empower authorities to demand backdoors to encrypted apps like WhatsApp, potentially scanning private messages for vaguely defined “harmful” content—a move critics like Big Brother Watch, a privacy advocacy group, decry as a gateway to mass surveillance. The OSA, which received Royal Assent on October 26, 2023, represents a sprawling piece of legislation by the UK government to regulate online content and “protect” users, particularly children, from “illegal and harmful material.”
Implemented in phases by Ofcom, the UK’s communications watchdog, it imposes duties on a vast array of internet services, including social media, search engines, messaging apps, gaming platforms, and sites with user-generated content, forcing compliance through risk assessments and hefty fines. By July 2025, the OSA was considered “fully in force” for most major provisions. This sweeping regime, aligned with global surveillance trends via Agenda 2030’s push for digital control, threatens to entrench a state-sanctioned digital dragnet, prioritizing “safety” over fundamental freedoms.
Elon Musk’s platform X has warned that the act risks “seriously infringing” on free speech, with the threat of fines up to £18 million or 10% of global annual turnover for non-compliance, encouraging platforms to censor legitimate content to avoid punishment. Musk took to X to express his personal view on the act’s true purpose: “suppression of the people.”
In late September, Imgur (an image-hosting platform popular for memes and shared media) made the decision to block UK users rather than comply with the OSA’s stringent regulations. This underscores the chilling effect such laws can have on digital freedom.
The act’s stated purpose is to make the UK “the safest place in the world to be online.” However, critics argue that it’s a brazen power grab by the UK government to increase censorship and surveillance, all the while masquerading as a noble crusade to “protect” users.
Another pivotal development is the Data (Use and Access) Act 2025 (DUAA), which received Royal Assent in June. This wide-ranging legislation streamlines data protection rules to boost economic growth and public services, but at the cost of privacy safeguards. It allows broader data sharing among government agencies and private entities, including for AI-driven analytics. For instance, it enables “smart data schemes” where personal information from banking, energy, and telecom sectors can be accessed more easily, seemingly for consumer benefits like personalized services—but raising fears of unchecked profiling.
Cybersecurity enhancements further expand the UK’s pervasive surveillance measures. The forthcoming Cyber Security and Resilience Bill, announced in the July 2024 King’s Speech and slated for introduction by year’s end, expands the Network and Information Systems (NIS) Regulations to critical infrastructure, mandating real-time threat reporting and government access to systems. This builds on existing tools like facial recognition technology, deployed extensively in public spaces. In 2025, trials in cities like London have integrated AI cameras that scan crowds in real time, linking to national databases for instant identification—evoking a biometric police state.
Source: BBC News
The New York Times reported: “British authorities have also recently expanded oversight of online speech, tried weakening encryption, and experimented with artificial intelligence to review asylum claims. The actions, which have accelerated under Prime Minister Keir Starmer with the goal of addressing societal problems, add up to one of the most sweeping embraces of digital surveillance and internet regulation by a Western democracy.”
Compounding this, UK police arrest over 30 people a day for “offensive” tweets and online messages, per The Times, often under vague laws, fuelling justifiable fears of Orwell’s thought police.
Yet, of all the UK’s digital dystopian measures, none has ignited greater fury than Prime Minister Starmer’s mandatory “Brit Card” digital ID—a smartphone-based system effectively turning every citizen into a tracked entity.
First announced on September 4 as a tool to “tackle illegal immigration and strengthen border security,” the Brit Card’s scope rapidly ballooned through function creep to envelop everyday essentials like welfare, banking, and public access. These IDs, stored on smartphones and containing sensitive data like photos, names, dates of birth, nationalities, and residency status, are sold “as the front door to all kinds of everyday tasks,” a vision championed by the Tony Blair Institute for Global Change—and echoed by Work and Pensions Secretary Liz Kendall MP in her October 13 parliamentary speech.
Source: TheBritishIntel
This digital shackles system has sparked fierce resistance across the UK. A scathing letter, led by independent MP Rupert Lowe and endorsed by nearly 40 MPs from diverse parties, denounces the government’s proposed mandatory “Brit Card” digital ID as “dangerous, intrusive, and profoundly un-British.” Conservative MP David Davis issued a stark warning, declaring that such systems “are profoundly dangerous to the privacy and fundamental freedoms of the British people.”
On X, Davis amplified his critique, citing a £14m fine imposed on Capita after hackers breached pension savers’ personal data, writing: “This is another perfect example of why the government’s digital ID cards are a terrible idea.” By early October, a petition opposing the proposal had garnered over 2.8 million signatures, reflecting widespread public outcry. The government, however, dismissed these objections, stating, “We will introduce a digital ID within this Parliament to address illegal migration, streamline access to government services, and improve efficiency. We will consult on details soon.”
Canada’s Surveillance Surge
Across the Atlantic, Canada’s surveillance surge under Prime Minister Mark Carney—former Bank of England head and World Economic Forum board member—mirrors the UK’s dystopian trajectory. Carney, with his globalist agenda, has overseen a slew of bills that prioritize “security” over sovereignty. Take Bill C-2, An Act to amend the Customs Act, introduced June 17, 2025, which enables warrantless data access at borders and sharing with US authorities via CLOUD Act (Clarifying Lawful Overseas Use of Data Act) pacts—essentially handing Canadian citizens’ digital lives to foreign powers. Despite public backlash prompting proposed amendments in October, its core—enhanced monitoring of transactions and exports—remains ripe for abuse.
Complementing this, Bill C-8, first introduced June 18, 2025, amends the Telecommunications Act to impose cybersecurity mandates on critical sectors like telecoms and finance. It empowers the government to issue secret orders compelling companies to install backdoors or weaken encryption, potentially compromising user security. These orders can mandate the cutoff of internet and telephone services to specified individuals without the need for a warrant or judicial oversight, under the vague premise of securing the system against “any threat.”
Opposition to this bill has been fierce. In a parliamentary speech, Canada’s Conservative MP Matt Strauss decried the bill’s sections 15.1 and 15.2 as granting “unprecedented, incredible power” to the government. He warned of a future where individuals could be digitally exiled—cut off from email, banking, and work—without explanation or recourse, likening it to a “digital gulag.”
Source: Video shared by Andrew Bridgen
The Canadian Constitution Foundation (CCF) and privacy advocates have echoed these concerns, arguing that the bill’s ambiguous language and lack of due process violate fundamental Charter rights, including freedom of expression, liberty, and protection against unreasonable search and seizure.
Bill C-8 complements the Online Harms Act (Bill C-63), first introduced in February 2024, which demanded platforms purge content like child exploitation and hate speech within 24 hours, risking censorship through vague “harmful” definitions. Inspired by the UK’s OSA and the EU’s Digital Services Act (DSA), C-63 collapsed amid fierce backlash over its potential to enable censorship, infringe on free speech, and deny due process. The CCF and Pierre Poilievre, who called it “woke authoritarianism,” led a 2024 petition with 100,000 signatures. The bill died during Parliament’s January 2025 prorogation after Justin Trudeau’s resignation.
These bills build on an alarming precedent: during the Covid era, Canada’s Public Health Agency admitted to tracking 33 million devices during lockdown—nearly the entire population—under the pretext of public health, a blatant violation exposed only through persistent scrutiny. The Communications Security Establishment (CSE), empowered by the longstanding Bill C-59, continues bulk metadata collection, often without adequate oversight. These measures are not isolated; they stem from a deeper rot, where pandemic-era controls have been normalized into everyday policy.
Canada’s Digital Identity Program, touted as a “convenient” tool for seamless access to government services, emulates the UK’s Brit Card and aligns with UN Agenda 2030’s SDG 16.9. It remains in active development and piloting phases, with full national rollout projected for 2027–2028.
“The price of freedom is eternal vigilance.” That old maxim, and Orwell’s 1984, warn that we must urgently resist this descent into digital authoritarianism—through petitions, protests, and demands for transparency—before a Western Great Firewall is erected, replicating China’s stranglehold that polices every keystroke and thought.
Republished from the author’s Substack
Tyler Durden
Mon, 11/24/2025 - 22:35
Tue, 25 Nov 2025 03:10:00 +0000 JP Morgan, Who Had No Issues Banking Epstein, Abruptly Closes Strike CEO Jack Mallers' Account
JPMorgan Chase abruptly closed Strike CEO Jack Mallers’ personal accounts last month, giving him no warning and offering only a cryptic explanation, according to Yahoo Finance.
Mallers posted on X that “Last month, J.P. Morgan Chase threw me out of the bank,” noting how odd it was given that “My dad has been a private client there for 30+ years.” When he asked why, the bank told him only: “We aren’t allowed to tell you.”
Yahoo writes that he even framed the closure letter, which accused him of unspecified “concerning activity” and warned the bank “may not be able to open new accounts for you in the future.”
The incident reignited concerns that the alleged Biden-era “Operation Chokepoint 2.0” is still lurking in the background, despite Trump’s new executive order aimed at penalizing firms that debank crypto businesses. Critics online immediately connected the dots, suggesting regulators and banks are still quietly squeezing crypto-aligned companies and founders.
JPMorgan’s move sparked a broader backlash from Bitcoin advocates like Grant Cardone, Max Keiser, and others who are already furious over the bank’s perceived hostility toward Bitcoin and its recent push to delist companies with heavy BTC exposure. Many publicly closed their JPMorgan accounts, accusing the bank of targeting the crypto sector while having no trouble maintaining far more questionable clients in the past. (Apparently “concerning activity” was never a problem back when they were happily banking Epstein.)
Tether CEO Paolo Ardoino replied to Mallers that the whole ordeal is “for the best,” later adding that organizations trying to undermine Bitcoin “will fail and become dust.” Meanwhile, JPMorgan insists it’s just protecting the “security and integrity of the financial system”—a claim that might land better if the bank’s compliance radar didn’t seem to activate only when the customer is a crypto CEO rather than, say, a notorious sex-trafficking financier.
Recall that just days ago we wrote that the bank is now under fire from Florida officials over its cooperation with the Biden DOJ's anti-Trump investigation known as “Arctic Frost,” providing sensitive banking information to Biden prosecutor Jack Smith.
We also noted that US regulators are examining whether JPMorgan Chase has denied customers fair access to banking, as pressure grows over debanking decisions made against conservative figures, according to reporting from the Financial Times and the company's 10-Q filing.
In its quarterly filing, the bank noted it was “responding to requests from government authorities and other external parties regarding, among other things, the firm’s policies and processes and the provision of services to customers and potential customers”.
JPMorgan linked the scrutiny to an August executive order from Donald Trump directing regulators to review possible “politicised or unlawful debanking”. The bank said related inquiries include “reviews, investigations and legal proceedings,” without identifying the agencies involved.
Tyler Durden
Mon, 11/24/2025 - 22:10
Google Denies Claims That It's Reading Gmails To Train Its AI
Authored by Jack Phillips via The Epoch Times (emphasis ours),
Google is denying viral claims that private Gmail emails are being used to train its AI models.
An illustration of a mobile phone and laptop with the Google website, on Dec. 14, 2020. Laurie Dieffembacq/BELGA MAG/AFP via Getty Images
The announcement follows multiple reports this past week that the company has rolled out such features.
In a post issued on Nov. 21, Gmail said that it wanted to “set the record straight on recent misleading reports.” It listed several points, saying, “We have not changed anyone’s settings,” Gmail’s “smart features” have existed for years, and, “We do not use your Gmail content to train our Gemini AI model.”
“We are always transparent and clear if we make changes to our terms [and] policies,” Google said.
The claims about Google included a post from cybersecurity company MalwareBytes, about which the company later issued a correction. Separately, a post on X from a YouTube content creator received around 150,000 likes. It contained similar claims that users were automatically opted into allowing Google to use Gmail emails to train its AI models.
“We’ve updated this article after realizing we contributed to a perfect storm of misunderstanding around a recent change in the wording and placement of Gmail’s smart features,” MalwareBytes said in its correction.
“The settings themselves aren’t new, but the way Google recently rewrote and surfaced them led a lot of people (including us) to believe Gmail content might be used to train Google’s AI models, and that users were being opted in automatically.”
The company noted that “after taking a closer look at Google’s documentation and reviewing other reporting, that doesn’t appear to be the case.”
Google has maintained on several of its blogs that it would protect user privacy regarding its Gemini AI models.
“Your data stays in Workspace,” says a company policy page. “We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission.”
It adds that for some features, including “accepting or rejecting spelling suggestions, or reporting spam,” suggestions are rendered anonymous or aggregated and could be used in “new features we are currently developing, like improved prompt suggestions that help Workspace users get the best results from Gemini features.”
“These features are developed with strict privacy protections that keep users in control,” the company says.
The smart features program for Gmail allows automated email filtering and categorization, automated composition of text in emails, and suggested quick replies, according to the company.
To determine whether the features are turned on or off, users can open Gmail on a desktop or mobile app and click on the gear icon before proceeding to See All Settings on desktop or Settings on mobile.
Then they can go to a section called smart features in Gmail, Chat, and Meet. To turn the features on or off, users can check or uncheck the box that says “Turn on smart features in Gmail, Chat, and Meet.”
Tyler Durden
Mon, 11/24/2025 - 21:45
The Mystery Of Intuition: Where Gut Feelings Really Come From
Authored by Makai Allbert via The Epoch Times (emphasis ours),
We’ve all experienced intuition in some form or another. The hunch of knowing without understanding why; the sense that something is right—or terribly wrong—before conscious thought catches up. Or a simple instinct that something is off about a stranger.
Illustration by The Epoch Times, Shutterstock
Intuition goes beyond superstition, serving as a sophisticated form of intelligence operating largely beneath conscious awareness.
The phenomenon raises a question that has intrigued scientists, philosophers, and everyday decision makers: Where do gut feelings really come from?
Knowing Without Knowing How
Studies have found that when chess grandmasters are given just five seconds to evaluate a position, they can make accurate predictions despite lacking time for conscious analysis.
Due to the thousands of hours of experience under their belts, their brains can make rapid decisions through pattern recognition, without requiring deliberate thought. This experience, similarly reflected among experts across many fields—doctors, military personnel, and firefighters—points to the possibility that intuition may emerge from a rich substrate of prior experience.
Emma Seppälä, psychologist and science director at Stanford University’s Center for Compassion and Altruism Research and Education, told The Epoch Times that in these instances, intuition is “a fast, instinctive form of intelligence that operates separately from our conscious thoughts.”
Yet, this kind of intuitive, rapid processing isn’t limited to professional skills. Going with your gut may be especially valuable in complex situations in your own life. Research shows that when people face complex decisions, such as selecting a home or making major life choices, those who focus on their feelings rather than painstakingly analyzing every detail often make better decisions and, perhaps even more importantly, are more satisfied with the outcome.
Illustration by The Epoch Times
Kamila Malewska, who studies intuition in managerial decision-making at Poznan University of Economics and Business, believes intuition is invaluable in situations with multiple alternatives, no clear criteria, insufficient information, and unique problems without precedent.
The Biology of Gut Feelings
We often say we have a “gut feeling,” and research now shows the phrase carries both a metaphorical and biological truth.
The gut has what scientists refer to as a “second brain,” comprising more than 200 million neurons. These neurons send signals back and forth with the brain through the vagus nerve, forming the gut-brain axis. This system creates a feedback loop that affects how we feel physically and emotionally.
Illustration by The Epoch Times, Shutterstock
Moreover, the health of the gut microbiota, which comprises approximately 38 trillion bacteria, can affect feelings of urgency, emotions, and even memory, as it produces chemicals that affect the brain. In mouse experiments, tweaking the gut microbiota balance can alter brain neurochemistry, making mice more bold or anxious. Notably, in humans, approximately 90 percent of serotonin, a key neurotransmitter that influences mood and decision-making, is produced in the gut. This indicates that emotional states and intuitive feelings may be influenced by the gut-brain axis.
This connection isn’t new. The vagus nerve may have helped our predecessors find food and avoid danger through gut-based intuitive signals. Today, the gut-brain system still functions, albeit in a different manner. When you feel butterflies in your stomach before a big decision, or a sinking feeling when something seems wrong, you may be experiencing this ancient communication system at work.
Unconscious Gestalt
Besides the gut-brain axis, neuroscientists have found other brain processes that may explain intuition.
One way to understand intuition is to examine how memories form.
Don Tucker, a neuroscientist who studies consciousness and memory, explained that memory occurs before you are aware of it.
“Memory is organized from an implicit level where general meaning is not fully articulated into conscious access, but is still very powerful in providing a sense of the gist of the information,” Tucker told The Epoch Times.
In other words, before we consciously remember or notice something, our brains, especially our limbic system, rapidly sort out experiences, picking up the important bits and giving a holistic level of understanding.
This process relates to another psychological concept called gestalt: the brain’s tendency to perceive patterns rather than individual parts, and to create closure to make sense of incomplete information.
Consider a manager interviewing a seemingly perfect candidate. Their resume seems impeccable, their answers are satisfactory, but something still feels wrong. Only later does the manager realize subtle inconsistencies in the candidate’s story, a shift in eye contact during discussions of previous employment, and a mismatch between verbal and nonverbal expressions. The cues may not have been noticed in the moment, but the brain assembled them into an intuitive warning—into an unconscious gestalt.
Neuroscience supports these ideas. The right hemisphere of the brain is good at spotting patterns and noticing things that don’t fit, even if we’re not aware of it. The hippocampus compares what we see now with past experiences, while the orbitofrontal cortex integrates emotional memories with present sensory input. The result appears as a feeling rather than a thought.
The process of unconscious becoming conscious is driven by what is called predictive processing.
Rather than passively receiving stimuli and then reacting, predictive processing theory suggests that the brain actively generates predictions about what it should perceive based on its experience. When these predictions detect a mismatch—something that does not fit the expected pattern—the result manifests as intuitive unease or “knowing.”
According to Tucker, consciousness develops from this primitive, intuitive level through a process of articulation. A vague feeling—a sense of “no, I shouldn’t do that”—gradually becomes more conscious and explicit as the brain works to understand why the feeling arose.
Could intuition also come from somewhere else?
Perhaps, instead of merely reacting to the present, intuition offers us a glimpse of the future.
Memories From the Future
In the mid-1990s, Dean Radin at the University of Nevada, Las Vegas, designed an experiment to test whether awareness could transcend time. He had participants connected to an EEG machine and placed in front of a computer screen. The computer randomly selected and displayed pleasant or disturbing images after a brief pause.
Radin noticed that people’s brains became more active just before seeing disturbing images, but not before positive ones. It was as if the brain could sense something bad was coming, even seconds before it happened. This effect was called “presentiment.”
Replicated results following Radin’s original experiment. Lower heart rate variability in response to disturbing images indicates a stronger fight-or-flight reaction. Illustration by The Epoch Times
The results were statistically significant, and other researchers, such as Daryl Bem at Cornell University, found similar effects in their own experiments.
A 2012 meta-analysis of 26 studies spanning three decades, covering experiments like Radin’s and Bem’s, suggested that human physiology can distinguish between randomly delivered emotional and neutral stimuli occurring one to 10 seconds in the future.
This isn’t precognition in the traditional sense—a psychic power of seeing future events—participants aren’t consciously predicting them. Instead, their autonomic nervous systems—heart rate, skin conductance, and brain activity—show measurable arousal before encountering emotionally significant stimuli. According to the 2012 meta-analysis, the effect size may be small. Still, it’s statistically significant across multiple laboratories and researchers, with the probability of the effect being a coincidence estimated at one in a trillion. That’s the equivalent of flipping a coin and getting heads 40 times in a row.
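The coin-flip comparison above checks out arithmetically; a two-line sanity check (the "one in a trillion" figure is the meta-analysis authors' estimate, the computation merely verifies the equivalence):

```python
# Probability of 40 consecutive heads on a fair coin: (1/2)^40
p = 0.5 ** 40

print(p)      # roughly 9.09e-13
print(1 / p)  # roughly 1.1 trillion, i.e. about "one in a trillion"
```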
Julia Mossbridge at Northwestern University, who led the meta-analysis, said when the study was released: “The phenomenon is anomalous, some scientists argue, because we can’t explain it using present-day understanding about how biology works.”
Read the rest here...
Tyler Durden
Mon, 11/24/2025 - 20:55