By Paul Homewood
NASA have a webpage, which lists the so-called facts about climate change.
One section covers “warming oceans”:
Now, if these figures are correct, and it’s a big if, what does this imply for AGW theory?
According to NASA,
“The oceans store more heat in the uppermost 3 meters (10 feet) than the entire atmosphere (above it).”
For the sake of argument, let’s assume that the computer models are correct, and that extra GHGs should raise atmospheric temperatures by 0.2C/decade. Since 1969 this would equal about 0.9C.
However, because ocean heat content is thousands of times greater than that of the atmosphere, it also takes thousands of times more energy to raise ocean temperatures by the same amount.
Assuming that the oceans had somehow absorbed their share of AGW, we can do a simple calculation of the temperature change we should expect to see in the top 700 meters:
700 meters divided by 3 meters = 233.3
0.2C divided by 233.3 = 0.00085C
0.00085C X 4.6 decades = 0.0039C
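The back-of-envelope arithmetic above can be sketched in a few lines of Python. The inputs are the post's own assumptions (a model warming rate of 0.2C/decade, and NASA's claim that the top 3 metres of ocean hold as much heat as the whole atmosphere), so treat this as an illustrative check, not a physical calculation:

```python
# Illustrative check of the ocean warming arithmetic above.
# Assumes, as the post does, that the top 3 m of ocean has the same
# heat capacity as the entire atmosphere, and a model-projected
# atmospheric warming rate of 0.2C per decade.

warming_per_decade_c = 0.2   # assumed atmospheric warming rate
decades_since_1969 = 4.6     # 1969 to the mid-2010s
layer_depth_m = 700          # depth NASA's 0.302F figure refers to
equivalent_depth_m = 3       # depth with the heat capacity of the atmosphere

dilution_factor = layer_depth_m / equivalent_depth_m            # ~233.3
warming_per_decade_ocean = warming_per_decade_c / dilution_factor
total_ocean_warming_c = warming_per_decade_ocean * decades_since_1969

print(round(dilution_factor, 1))         # 233.3
print(round(total_ocean_warming_c, 4))   # 0.0039
```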
There is a slight difference between 0.0039C and 0.302F! If the oceans really have warmed by 0.302F since 1969, it cannot be due to GHGs. There has to be some other explanation.
And if GHGs really have been responsible for warming the deep ocean by 0.0039C, it would be utterly impossible to measure such a microscopic amount across the world’s oceans and over several decades.
By Paul Homewood
I took a look at Bo Vinther’s SW Greenland temperature series the other day. Robin Edwards, who alerted me to it, has also written an analysis of it, and his guest post is below.
It gets a bit technical, but is worth reading through to the end:
By Paul Homewood
I had to read this twice!
From the Washington Times:
A former Greenpeace leader butted heads Tuesday with the anti-fracking movement by insisting that hydraulic fracturing is needed to help fight global warming.
Stephen Tindale, who was executive director of Greenpeace U.K. from 2000 to 2005, said that fracking, used to extract natural gas and oil from underground rock, helps combat greenhouse-gas emissions by reducing reliance on coal.
“[T]oday Britain faces its biggest environmental challenge ever — tackling global warming while still keeping the lights on,” Mr. Tindale said in the Tuesday article for the [U.K.] Sun. “And as a lifelong champion of the Green cause, I’m convinced that fracking is not the problem but a central part of the answer.”
He praised the British government’s recent approval of a shale-gas project in Lancashire, calling it “a great start, but that’s all it is. We need dozens more like it if Britain is to meet our energy needs in the decades to come.”
“And if activist groups including Greenpeace really want to help the environment, they should stop protesting about projects like this and let them be built as quickly as possible,” said Mr. Tindale, who now leads the environmental think-tank Climate Answers.
By Paul Homewood
GWPF report on an article in the Financial Times, which unfortunately is behind a paywall:
Oil and gas companies are valued largely on reserves that will be produced over the next 15 years, meaning that their investors are not vulnerable to longer-term changes in energy markets, a leading industry adviser has said.
Daniel Yergin of IHS Markit rejected warnings of a “carbon bubble” that could destabilise financial markets as policies to combat climate change hit fossil fuel producers, saying the transition to renewable energy would take decades and investors would have time to adjust their holdings.
The dangers for financial assets created by climate change have become an increasingly prominent issue for investors. Last year, ministers from the Group of 20 countries instructed the Financial Stability Board of their regulators and policymakers to start looking at the risks and how to address them.
Mark Carney, governor of the Bank of England who chairs the FSB, argued in a speech last year that regulators needed to address the problem now, because “once climate change becomes a defining issue for financial stability, it may already be too late”.
The action by regulators could restrict the flow of capital to oil and gas companies by making it harder for banks and other financial institutions to lend to and invest in the industry.
In a paper published on Wednesday, Mr Yergin argued that the concerns expressed by Mr Carney and others have been overdone, because investors generally look at relatively short time horizons when valuing oil and gas assets.
It is a point I have been arguing for a while. There is simply no way that renewable energy can make more than a dent in demand for fossil fuels in the next couple of decades, regardless of what might happen in fifty years’ time.
Stock valuations of oil companies are essentially based on their earning potential in the short to medium term, just as most companies are. Maybe they won’t be around in 2050, but exactly the same argument could be made about Apple or Google.
Earning potential in the long term is ignored in valuations, not just because it is so uncertain, but also because when discounted back to NPV it is too small to be relevant.
The real danger to the fossil fuel industry is not what might happen in 30 years’ time. It is that interventions from the likes of Mark Carney could make it harder to raise new capital, thus leading to reduced investment in new fields.
This in turn will inevitably lead to oil prices spiking, with all the ensuing damage to the global economy.
What I had not appreciated was that Carney’s intervention arose from a decision by those G20 ministers.
By Paul Homewood
The recent South Australian blackout has triggered a debate about the manifest risks of wind farms to the security of electricity networks. National Grid’s 2016/17 Winter Outlook reinforces previous concerns that low-carbon policy mandates are resulting in electricity systems that are likely to be fragile in the face of external shock, and are therefore more difficult and consequently more expensive to manage.
The UK’s National Grid has just published its Winter Outlook for 2016/17, in which it describes the situation this winter as “tight but manageable” (Overview section, p. 14).
The margin of “derated capacity” over expected peak load is roughly 3.4 GW over about 52.7 GW expected peak load, or a margin that National Grid quotes as 6.6%. However, the constitution of this margin both undermines confidence in its resilience, and reminds us that security of supply is increasingly dearly bought in the United Kingdom.
The margin in fact is critically dependent on the 3.5 GW of contingency balancing reserves (defined on p. 14 as “additional capacity held outside the market”, meaning all sorts of odds and ends). It is interesting to note that, on page three, National Grid observes that some units in the supplemental balancing reserve need more than one day’s notice of operation, which is not encouraging, either for reliability or for cost (plant brought on in good time may become surplus to requirements, but will have to be paid for in any case).
Excluding the (derated) contingency balancing reserve, the margin is only about 1% of the 52.7 GW peak load, i.e. roughly 580 MW, with a Loss of Load Expectation of 8.8 hours per year. This is not attractive, and shows how reliant the system has become on expensive contingency balancing reserves.
Furthermore, net interconnector imports are assumed to be 2 GW. It must be questionable whether that is a safe assumption. We know from ample empirical evidence in Europe that interconnectors should not be relied upon in a tight corner, since they must reduce transit or even disconnect to protect themselves. In any case, if the market on the other side of the interconnector is also tight, they may be of very little use at all. The news that the French system is itself likely to experience tight capacity margins this winter, due to the safety inspections of its nuclear fleet, is a reminder that this is no merely theoretical concern.
National Grid has de-rated grid connected windpower using the arguably generous Equivalent Firm Capacity of 21% (i.e. 0.21 x 10 GW = 2.1 GW). Given the overall narrowness of the margin, an error here could be critical.
Another point of concern is the fact that National Grid appears to have netted the derated capacity of embedded wind generation from the load estimate, i.e. 0.21 x 4.6 GW = approximately 1 GW. Again this could be badly wrong, and in any case, as one engineer has put it to me “This does not capture the combined probabilities of high demands and low availabilities of generation.”
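The margin and derating figures quoted above can be sketched as simple arithmetic. All inputs below are the post's own numbers from the Winter Outlook, so this is an illustrative reconstruction rather than National Grid's actual methodology:

```python
# Illustrative reconstruction of the Winter Outlook margin figures
# quoted above. Inputs are as stated in the post.

peak_load_gw = 52.7          # expected peak demand
residual_margin_gw = 0.58    # margin excluding contingency balancing reserve
efc = 0.21                   # Equivalent Firm Capacity applied to wind
grid_wind_gw = 10            # grid-connected wind capacity
embedded_wind_gw = 4.6       # embedded wind netted off the load estimate

# Margin excluding the contingency balancing reserve, as a share of peak:
print(round(100 * residual_margin_gw / peak_load_gw, 1))  # 1.1 (%)

# Derated wind contributions:
print(round(efc * grid_wind_gw, 2))      # 2.1 GW grid-connected
print(round(efc * embedded_wind_gw, 3))  # 0.966 GW embedded, i.e. ~1 GW
```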
In summary, the Outlook shows that the UK system is heavily dependent on costly contingency balancing reserves, the interconnectors, and on arguably optimistic assumptions about wind. As National Grid’s own summary, “tight but manageable”, suggests, it is now obvious that the UK has a fragile electricity system.
The potential consequences of such fragility, arising for similar reasons, have recently been made painfully evident in South Australia, which suffered a total system blackout on the 28th of September, with full service to all consumers not completely restored until the 12th of October. The preliminary report and its update by the Australian Energy Market Operator (AEMO) present the current state of knowledge.
What we can infer at present is that the distal cause of the blackout was a state policy that discouraged conventional generation, and resulted in a system that was heavily dependent on wind turbines and on a single interconnector to the neighbouring state of Victoria. Experience has revealed this as a fragile system. The proximal causes of the blackout were a major storm with wind speeds in excess of those forecast. It appears that these winds caused grid damage resulting in voltage disturbances. The wind generators were not programmed to ride through such serious faults, and disconnected, resulting in the loss of 445 MW of generation, about 23% of the system load at that time (1,895 MW). This disconnection transferred the burden to the Victoria interconnector, which could not sustain that load, and itself disconnected to prevent damage, with a further loss of 900 MW of supply. The whole burden now transferred in a split second to the online conventional generation, which needless to say could not meet it and also disconnected, resulting in a black system. The entire event lasted about ninety seconds.
System operators world-wide will now be reviewing their policies on wind turbine Fault Ride Through, and AEMO itself is as a matter of urgency requiring the wind turbine operators to provide more robust FRT, but this is more than changing the settings on a dial. Generators disconnect to prevent mechanical damage, which is a real hazard during a fault, so if higher levels of FRT are required, the wind turbines will probably have to be modified to make them less susceptible to gross physical harm in the event of major voltage disturbances. Conventional generation is, as a rule, already engineered to withstand fairly extreme faults, as witness the fact that none of the conventional generation that was online during the South Australian event disconnected until the interconnector came offline. Ensuring that the wind turbines are similarly robust may not be cheap, and there are already signs that wind operators in South Australia are reluctant. The chief executive of one of the operators was quoted in the Australian Financial Review as saying he was uncertain whether enhanced Fault Ride Through requirements would expose his equipment to damage: “It could have zero effect, it could have longer term operations and maintenance costs, it could have any number of issues”.
In fact the likelihood of it having zero effect on the wind turbines is small; these devices, already very expensive, will almost certainly have been engineered to be adequate, though not very much more than adequate, to the levels of Fault Ride Through generally required. More demanding levels of FRT will thus mean that they are de facto under-engineered, with impacts on maintenance costs and reliability. Re-engineering will not be cheap, and improving the standards for new wind turbines will have a significant effect on their capital cost, pushing the hoped-for independence from subsidy still further off into the future.
Read together, National Grid’s UK Winter Outlook and AEMO’s reports on the South Australian case, suggest that systems heavily exposed to wind generation tend to be fragile, and rendering such systems adequately robust is both difficult and, crucially, expensive.
By Paul Homewood
Well, that did not last long then!
The French government is set to drop plans to introduce a carbon tax, French financial daily Les Echos said on Thursday.
The newspaper, quoting several sources, said the socialist government will not include the carbon tax in a draft 2016 budget update currently being discussed.
Environment Minister Segolene Royal had said in May that France would unilaterally introduce a carbon price floor of about 30 euros ($33) a tonne with a view to kickstart broader European action to cut emissions and drive forward the December 2015 United Nations-led international climate accord.
The plan had pushed power prices higher in the spring.
Les Echos quoted a source as saying that the measure is too complicated to put in place and might be unconstitutional.
The paper said that state-owned electric utility EDF, which produces mostly carbon-free nuclear power, was in favor of the measure, but that gas utility Engie SA had lobbied against the tax because it would make its gas-fired power plants less competitive than similar plants in neighboring countries.
A source close to the French government told Reuters that nothing had been decided yet on the carbon tax but confirmed there were doubts about it.
"In the current context, it is difficult, due to concerns about employment, legal difficulties and security of supply," the source said.
French power prices have spiked higher in recent weeks as a series of unplanned nuclear reactor closures have led to worries about security of supply.
The government is due to receive a report about the carbon tax in coming days and will decide on it mid-November, the source said.
Meanwhile the UK govt is still committed to a Carbon Price Support, a carbon tax in effect, of £18/tonne CO2, on top of the EU traded price for carbon.
By Paul Homewood
Wind power could supply up to 20 per cent of the world’s power needs by 2030, according to new analysis.
In its latest Wind Energy Outlook, released this week, the Global Wind Energy Council (GWEC) predicted that worldwide wind capacity could reach 2110 GW by 2030, with annual investment in the sector growing to €200bn ($220bn). By 2050, under GWEC’s best-case scenario, global installations could reach 5800 GW.
The world’s wind power installations totalled 433 GW at the end of 2015, with a record number of new projects amounting to 63 GW and representing a 17 per cent increase on 2014, the report found. China maintained its lead with capacity additions of 30.8 GW and an installed base of 145 GW, followed by the US with 74 GW, Germany with 45 GW, India at 25 GW, Spain with 23 GW and the UK with 13 GW. Also in 2015, Brazil entered the 10 GW+ bracket for the first time.
And the industry is set to grow by around 60 GW in 2016, GWEC predicted. However, challenges still remain for many regional markets. In Europe, where there are 148 GW installed, a cocktail of policy changes, economic crises and austerity measures is viewed as likely to produce a “difficult” year ahead, although a shift in investment away from fossil fuel-fired power could provide a boost, GWEC noted. In Asia, curtailment due to transmission bottlenecks remains a significant problem in the largest market, China, and a slowdown could be seen in 2017. Continuing growth is predicted for North America, where the US is experiencing a ‘wind rush’ due to unwonted policy stability.
“With new markets developing rapidly across Africa, Asia and Latin America; unprecedented policy stability in the US market; strong and continued commitment from India and China; and the rapidly dropping prices for wind power both on and offshore – on the whole things look very good for the industry,” GWEC said, adding that “but of course much could go wrong…history rarely follows the smooth curves in this and other reports. But at least now the direction of travel is the clearest it has ever been.”
The full report is available here.
We need to remember that the GWEC exists to promote the interests of wind farm operators, so we need to take this with a large pinch of salt. And as they say themselves, but of course much could go wrong…history rarely follows the smooth curves in this and other reports.
The target of 2110 GW is actually derived from their Advanced Scenario, the most optimistic of four. It would imply capacity of 112 GW added each year up to 2030, nearly double that added in 2015.
Their base case, New Policies Scenario, is much more realistic, already taking account of Paris and other commitments made by governments. Under this, wind capacity would be much lower by 2030, at 1259 GW.
But even assuming that the higher projection is correct, there is still another problem. The claim that wind could supply 20% of the world’s power is based on CURRENT electricity consumption.
It is widely accepted that electricity demand will increase sharply in years to come. The BP Energy Outlook reckons that demand will increase by 43% by 2035.
If we assume a figure of 40% for 2030, the claimed 20% share actually drops to 14%. The more realistic New Policies Scenario only amounts to 8%, compared to a current level of 3%.
And, of course, there is one more fly in the ointment – electricity only accounts for a fraction of total energy consumption. According to BP, the share of power generation will be 45%.
In other words, wind power will be unlikely to supply more than 4% of global energy consumption even by 2030.
And the cost for this paltry contribution? $220 billion a year, or $3.3 trillion in total.
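The share arithmetic above can be sketched as follows. Every input is one of the post's own figures (GWEC's two scenarios, the assumed 40% demand growth to 2030, and BP's 45% power share of energy), so the outputs are rough illustrations only:

```python
# Sketch of the wind-share arithmetic above, using the post's figures.

claimed_share_2030 = 0.20       # GWEC claim, based on current consumption
demand_growth_factor = 1.40     # assumed ~40% demand growth by 2030
advanced_gw = 2110              # GWEC Advanced Scenario capacity, 2030
new_policies_gw = 1259          # GWEC New Policies Scenario capacity, 2030
power_share_of_energy = 0.45    # BP: power generation ~45% of energy

# Adjust the claimed 20% share for demand growth:
advanced_share = claimed_share_2030 / demand_growth_factor        # ~14%
# Scale down to the more realistic New Policies capacity:
new_policies_share = advanced_share * new_policies_gw / advanced_gw  # ~8.5%
# Electricity is only ~45% of total energy consumption:
energy_share = new_policies_share * power_share_of_energy         # ~3.8%

print(round(100 * advanced_share, 1))      # 14.3
print(round(100 * new_policies_share, 1))  # 8.5
print(round(100 * energy_share, 1))        # 3.8
```

The last figure is where the post's "unlikely to supply more than 4% of global energy consumption" comes from.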
By Paul Homewood
Dellers writes for Breitbart:
Alarmist scientists are trying to cover up the good news that rising CO2 levels are making the planet turn greener. And that even includes one of the scientists who made the discovery in the first place.
By Paul Homewood
h/t Green Sand
From the Telegraph:
National Grid is to be guaranteed a minimum of £1.3bn income for building the world’s longest subsea power cable to import electricity from Norway.
Energy regulator Ofgem on Tuesday announced plans for consumers to guarantee the utility giant at least £53m annual revenues for 25 years in return for its 50pc investment in the £1.4bn North Sea Link interconnector.
The 450-mile cable from Blyth in Northumberland to Kvilldal in Norway is due to be built by 2021 and will be the first electricity link between the two countries.
It will be able to import or export up to 1.4 gigawatts (GW) of electricity – enough to power about three quarters of a million UK homes.
Interconnector owners make their income by selling companies access to their cable to trade power.
However, Ofgem is keen to encourage investment in the cables and has introduced a "cap and floor" regime guaranteeing developers a minimum income, backed up by consumer subsidies if needed, while also limiting their total revenues.
Ofgem said the minimum £1.3bn revenue that National Grid would be guaranteed by consumers compared with estimated benefits to consumers of £3.5bn by accessing cheaper power from Norway.
The proposed cap has been set at £94m a year, or £2.3bn over the 25 year support regime. Beyond that, any revenues would be paid back to consumers.
The “cap and floor” guarantee covers revenues from selling of capacity and any payments received under the capacity market mechanism. Not included is revenue from actual electricity sold.
The cost of the arrangement is passed onto consumers, and is indexed to RPI.
Ofgem have based the guarantee on capacity utilisation of 93%, which therefore would yield 11.4 TWh annually. With a range of £53 to £94 million, this would yield a cost of £4.65 to £8.24 / MWh, on top of the electricity purchased from Norway.
It remains to be seen what price the latter will be.
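The cost-per-MWh range above can be checked quickly, assuming the 93% utilisation Ofgem quotes and 8,760 hours in a year:

```python
# Quick check of the cap-and-floor cost per MWh quoted above.
# Assumes Ofgem's 93% utilisation of the 1.4 GW link.

capacity_gw = 1.4
utilisation = 0.93
hours_per_year = 8760

annual_twh = capacity_gw * utilisation * hours_per_year / 1000  # ~11.4 TWh
annual_mwh = annual_twh * 1e6

floor_cost_per_mwh = 53e6 / annual_mwh   # floor: £53m/year
cap_cost_per_mwh = 94e6 / annual_mwh     # cap: £94m/year

print(round(annual_twh, 1))           # 11.4
print(round(floor_cost_per_mwh, 2))   # 4.65
print(round(cap_cost_per_mwh, 2))     # 8.24
```

So the guarantee amounts to roughly £4.65 to £8.24 per MWh carried, before paying anything for the Norwegian electricity itself.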
By Paul Homewood
From The New American:
Expressing nostalgia for the days when just three establishment-controlled propaganda organs dominated the public narrative, President Obama lashed out at what he called the “wild, wild west” media landscape that allows non-establishment voices and viewpoints to be heard. Claiming that “censorship” would not be the answer, Obama called for Americans to submit to a vaguely defined (presumably government-run) “curating function” that would help “discard” unapproved information. Critics, though, warned that an increasingly desperate establishment was plotting all-out war on freedom of the press and the free Internet.
Speaking at an innovation conference in Pittsburgh last week, Obama called for what analysts said sounded like a government crackdown on free speech and online freedom. “We are going to have to rebuild within this wild-wild-west-of-information flow some sort of curating function that people agree to,” he argued, expressing concerns about “conspiracy theorists” and skeptics of the man-made global-warming theory having a platform. Other senior Obama officials, including former “regulatory czar” Cass Sunstein, have even proposed a government “ban” on conspiracy theorizing.
Praising the days when just three TV channels dominated the news and people still “generally” trusted their propaganda, Obama, implying Americans are too dumb to sort truth from lies without government help, said something had to be done about the free flow of information. “There has to be, I think, some sort of way in which we can sort through information that passes some basic truthiness tests and those that we have to discard, because they just don’t have any basis in anything that’s actually happening in the world,” Obama claimed. Apparently he was not referring to the establishment media-created alternate reality that is increasingly falling on deaf ears. Ironically, though, Obama has openly boasted about lying to the American people.
While short on details and how his scheming would square with the Constitution’s First Amendment, Obama’s comments sounded suspiciously totalitarian. “That is hard to do, but I think it’s going to be necessary, it’s going to be possible,” he said at the summit, without giving details on just how it would be done, much less how it would be done constitutionally. “The answer is obviously not censorship, but it’s creating places where people can say ‘this is reliable’ and I’m still able to argue safely about facts and what we should do about it.”
Some have speculated that Obama wants to create a government-run media organ, though that would probably have even less credibility than the government or the establishment media do now. Other analysts suggested that, despite the denial, censorship is exactly what Obama and the establishment have in mind. In the European Union, it is already well underway, with help from America’s biggest technology companies. Essentially, it sounded like Obama was calling for the establishment to set the acceptable parameters of debate, and then tolerate any views within the establishment-approved box while silencing any others.
“The way I would like to see us operate is, yes, significant debate and contentious debate, but where we are operating on the same basic platform, same basic rules, on how do we determine what’s true and what’s not,” Obama continued. “Everything on the internet looks like it might be true. And so in this political season, we’ve seen — you just say stuff. And so everything suddenly becomes contested. That I do not think is good for democracy, and it’s certainly not good for science, for progress, for government, for fixing systems.” Of course, America is a republic, but that is beside the point.
Read the full article here.