Tuesday, April 1, 2014

Post # 93: My Take On The Climate Change Debate

I am not a climate scientist.  I'm a nuclear engineer.  But having spent over thirty years at Oak Ridge National Laboratory (ORNL), I know good science and good R&D technique when I see it.  I'm frequently asked my views on the global climate change debate.  During my years at ORNL, I "rubbed elbows" with a number of outstanding scientists involved in climate research and simulation.  I approach the issue from the perspective of someone schooled and experienced in the application of the Scientific Method, and one who is intimately familiar with the challenges of understanding and simulating large, complex systems.

So, with some reservations, I'm going to (finally) share my views here, in a Question & Answer interview format.

QUESTION 1:  Do I believe the climate is changing?

ANSWER:  Of course!  During the not-too-distant past, the "temperature" of the earth has been both much hotter and much colder than it is now and has been during my lifetime.  Most of the wildest swings in temperature predate significant human populations and the industrial revolution. Witness:

History of Earth's Temperature (Ref: Glen Fergus @ http://commons.wikimedia.org/wiki/File:All_palaeotemps_svg.svg)

QUESTION 2:  Do I believe humans and human activities are a major driver of climate change (i.e., do I believe in "Anthropogenic Warming")?

ANSWER: I'm agnostic on this issue.  I've examined a robust sample of the available scientific information on anthropogenic warming. When examined objectively and in the context of issues I'll note below, it simply isn't conclusive.  I'm NOT saying we humans aren't driving climate change.  We might be.  But the available evidence, viewed in context, simply isn't compelling (at least to me and many technical professionals like me).

Given the emotional charge surrounding this issue, I do feel compelled to offer a bit of my reasoning with regard to why I'm agnostic about Anthropogenic Climate Change.  My reasoning, as simply as I can compress it here, comes down to:

1.  HISTORY: As noted above, the Earth's climate has been both much hotter and much colder than it now is – and these swings obviously had nothing to do with human activities.  Therefore, it is reasonable to believe the earth's temperature should continue to vary with time.

2.  KNOWLEDGE: The phenomena and mechanisms determining the Earth's climate are extraordinarily complex, and our understanding of many of these mechanisms is rudimentary at best.  The various phenomena are coupled in extremely complex ways.  Viewed from an engineering perspective, the system contains both "positive" and "negative" feedback effects, and both linear and non-linear phenomena.  It is neither a closed nor an open system, but some hybrid of the classical definition of these systems.  I spent much of my career simulating extraordinarily complex nuclear reactors and severe accidents in nuclear reactors.  That's kid's play compared to the challenge of simulating the complexity of the earth's biosphere and its climate.

3.  MODELS: Our climate change models simply "aren't there yet".  We cannot yet accurately predict climatic temperature changes for one to two decades – much less a century or more.  (Heck, we can't accurately predict the temperature in East Tennessee a few days in advance.)  As evidence, I'll simply point out that only a couple of some ninety major climate change models used by the global climate simulation community accurately predicted the "pause" in climatic temperature escalation we've witnessed during the past fifteen years or so.  You can easily overwhelm yourself with articles about this development by "googling" "climate models" and "pause".  Here's a compendium of predictions assembled by Roy Spencer (with whom I have no affiliation):

Compilation of Global Climate Model Predictions vs. Observed Data (Ref. DrRoySpencer.com)

The errors between the predictions and the actual observed global temperatures have grown over the past ten years or so.  (Some in the climate modeling community have attempted to explain away the poor correlation between the predictions and the observations by citing any number of unexpected natural phenomena responsible for the differences.  But doesn't that actually support my point?)  I know from decades of complex simulation work that the best indicator of one's understanding of a phenomenon is one's ability to predict the future behavior of that phenomenon.  Judging by that standard, we have a long way to go in climatic modeling.  The problem is no doubt some combination of missing physics and phenomena, physics for phenomena that are modeled incorrectly, missing or incorrect feedback loops, and spatial and temporal smoothing/averaging schemes.  George Box said, "All models are wrong, but some are useful."  Sherrell Greene says, "... and the only way to know which models are useful is to get the data."  (Oh... and one other thing I learned during my simulation career is that often the most important thing one gains from a simulation isn't the answer, but rather the ability to ask more intelligent questions.)  This leads to the next issue...
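To make the prediction-scoring idea concrete, here's a minimal sketch of the kind of hindcast error metric I have in mind: compare a model run against the observed record with a root-mean-square error.  (The anomaly numbers below are invented for illustration – they are not real model output or real observations.)

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between a model's projections and observations."""
    assert len(predicted) == len(observed)
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Illustrative (synthetic) decade of annual temperature anomalies, deg C:
model_run = [0.20, 0.28, 0.35, 0.41, 0.48, 0.55, 0.61, 0.68, 0.74, 0.80]
observed  = [0.21, 0.25, 0.24, 0.28, 0.27, 0.30, 0.29, 0.33, 0.31, 0.34]

print(round(rmse(model_run, observed), 3))  # → 0.275
```

A model whose RMSE against out-of-sample data keeps growing year after year – as in the Spencer compilation above – is telling you something about how well its physics is understood.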

4.  DATA: Our climate data sets aren't yet sophisticated enough to validate the models.  It is virtually impossible to validate climatic models in the classical engineering sense of the term.  The problem has to do both with the specific parameters (variables) the models predict (spatially and temporally averaged variables) and the limitations of the parameters we can actually measure and the data we can actually collect.   Put simply, due to the enormous geospatial and temporal data averaging/smoothing required in the simulation models, it's extraordinarily difficult to define a data collection paradigm that accurately samples the actual parameters the models are calculating.  After all, what is "the Earth's average temperature"?  This challenge is not unlike having a model that predicts the "average heart rate" of an American.  Exactly what data does one collect (and where and when does one collect it), to obtain a suitable data set for validation of the model's predictions?  And once we have the "average heart rate", how do we interpret and use the information?  When you can't actually measure what you are predicting, you are forced to synthesize values for the predicted parameters from parameters you can measure.  This "data synthesis" problem has been the source of countless pains and sorrows in the simulation business since the inception of computer simulation.  Data measurement uncertainty, instrument bias, spatial averaging, time averaging, data interpolation, and data extrapolation of actual measured values can be the devil's workshop (wittingly or unwittingly).  One simple case in point:  Steve Goreham, the Executive Director of the Climate Science Coalition of America (with whom I have no affiliation) has shared two posts (here and here) that articulate the common-sense concerns many have today with the prevailing "scientific community" (whatever and whomever that is) view on global warming...
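The "average temperature" ambiguity can be made concrete with a toy example (all numbers invented).  On a latitude-longitude grid, cells shrink toward the poles, so an honest global mean must weight each cell by the cosine of its latitude; a naive unweighted average of the same readings gives a different answer – and that's before one touches instrument bias, interpolation, or time averaging.

```python
import math

def area_weighted_mean(grid, lats):
    """Global mean of grid[i][j], the temperature at latitude lats[i] (degrees)
    and longitude index j.  Each cell is weighted by cos(latitude), since
    equal-angle lat/lon cells cover less area toward the poles."""
    num = 0.0
    den = 0.0
    for row, lat in zip(grid, lats):
        w = math.cos(math.radians(lat))
        for t in row:
            num += w * t
            den += w
    return num / den

# Toy three-latitude-band "planet": hot equator, cold high latitudes (deg C)
lats = [60.0, 0.0, -60.0]
grid = [[-10.0, -12.0], [28.0, 30.0], [-8.0, -11.0]]

naive = sum(t for row in grid for t in row) / 6       # ≈ 2.83
weighted = area_weighted_mean(grid, lats)             # 9.375
# The naive average over-weights the small high-latitude cells,
# dragging the "global mean" several degrees colder than the area truth.
```

The point isn't this particular weighting scheme – real products are far more elaborate – but that the number called "the Earth's average temperature" is a construction, and different defensible constructions yield different numbers.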

5.  ORGANIC "PRESSURE" IN THE CLIMATE RESEARCH ENTERPRISE.  There are many, many fine scientists conducting climate change research. They are professionals of the highest skill and integrity.  (I believe the vast majority of researchers fall into this camp.)  However, any "society" produces more of what it rewards.  In the scientific research community, one of the most important metrics of professional success is the level of research funding one secures and sustains.  And in this, the squeaky wheel does usually get the grease.  Speaking as one who spent over thirty years in the federal research complex, I know it is far easier to attract and sustain research funding to attack an imminent crisis, than it is to attack a slowly evolving issue with uncertain consequences.  The result of this reality is that the "organic" pressures (often subliminal) within the international climate research enterprise will naturally tend to promote an atmosphere of doom and gloom.  It is simply a fact that many of those in the scientific community who most loudly trumpet the scourge of man-made climate warming are the ones whose careers depend on the flow of national and international dollars into climate change research.  (Cautionary Note: this doesn't, by the way, mean the doom and gloomers are wrong – just that one should maintain a healthy scientific skepticism about the entire matter.)  This relates to my final issue...

6.  DOGMA vs. THE SCIENTIFIC METHOD.  Finally, I'm extremely concerned about the defensive and unprofessional attitude some in the climatic research community take with regard to those who question the status quo or their definition of the scientific community's "consensus opinion" on climate change.  (The emails revealed via the highly-publicized and unethical hack into the emails of the Climatic Research Unit at the University of East Anglia in 2009 spotlighted this type of behavior.)  Be careful when the first response to questions or criticism by anyone claiming to represent the scientific community is to disrespect, disparage, and otherwise question the intelligence or honesty of the one posing the question.  This is a sure sign the "expert" has abandoned the Scientific Method in favor of his or her adopted dogma.

QUESTION 3:  What should we do about climate change?

ANSWER:  First, we should continue our climate simulation and climatic data collection activities.  Simulation models are the ultimate laboratory for integrating our knowledge and testing hypotheses – but only when the correct data is available for validation of the models.

Beyond that, the answer to this question really deconvolves into a series of other questions:
  • How credible are current long-term climatic predictions (and in particular, are they sufficiently credible to inform and/or drive national and international policy decisions)?
  • Presuming current global warming predictions are credible, what are the implications of these predictions for humans & the biosphere (barring a change of course)?
  • What can we really do about the factors that may be driving climate change?
  • What are the negative and beneficial impacts of global climate change, and WHO/WHERE are the "winners and losers" (and there are both) if the dire global climate change predictions are true?
  • What are the cost/benefit parameters for identifiable mitigative actions?
At this point, I'm prepared to say that pumping more carbon dioxide into the atmosphere isn't a good idea.  I just don't know how bad the consequences of doing so actually are in light of all the other uncertainties, unknowns, and known factors impacting global climate change.  So it's almost impossible to quantify the "cost/benefit" ratio of various proposed climate change mitigation actions.  Thus I'm skeptical about the wisdom of extremely costly mitigative actions. 

Frankly, I'm also skeptical we can do much, on a global scale, to reduce net greenhouse gas emissions over the next few decades.  We should extract every reasonable benefit from new behaviors and new technologies.  But, we must stay grounded in "the possible" rather than in a dreamworld that will never be.  Fossil fuels are king and will remain so (globally) for many decades.  Clean coal technology isn't here yet.  So the most important question may well be:
  • How can we best adapt to expected climate change scenarios?
I'm encouraged by signs that the dialog is beginning to shift to this question (see today's post by Uri Friedman and Narula at theAtlantic.com).

Well, this post became much longer than I had planned.  To sum it all up,
  • Yes, the climate is changing, 
  • It isn't clear humans are the major contributors to the change, and
  • I feel our time and treasure are best spent seeking realistic strategies to adapt to the most probable climate change scenarios, rather than pursuing unrealistic and costly schemes that have little real chance of reducing net global greenhouse gas emissions to the levels many in the climate change community feel are required to halt global warming.
Above all, respect the Scientific Method.  It keeps us honest.

Just Thinking,

Thursday, March 13, 2014

Post # 92: Grid Vulnerability and the Prepper Next Door

There's a disturbing article in today's Wall Street Journal discussing the results of a tightly-held study completed last year by the Federal Energy Regulatory Commission (FERC).  According to the Journal, the study, which was focused on U.S. electric grid vulnerability, concluded that a coordinated attack on as few as nine of the country's 55,000 substations could bring down the entire U.S. grid (Western, Texas, and Eastern).  Not only that, but the study apparently concluded the grid would likely stay down for eighteen months or longer! 

All of this, of course, in the wake of last year's "wake up call" attack on PG&E's Metcalf, CA substation.  During that attack, vandals cut underground phone lines, and fired over 100 rounds into the substation over about an hour, destroying or disabling some seventeen large transformers.

Today's WSJ article immediately raised three related trains of thought in my mind.

First, the threat of physical and cyber terrorism must obviously be seriously considered – along with other threats such as weather- and seismic-related phenomena, electromagnetic pulse (natural and man-made), and solar storms such as the 1859 "Carrington Event", which reportedly not only crashed continental telegraph systems, but actually set telegraph poles on fire in New England.  (There is data from Greenland ice cores that suggests solar storms as large as the Carrington Event can be expected every 500 years, and storms 20% this size are to be expected every few decades.)  It's hard to imagine the destruction such a storm could cause to our modern electrical and telecommunications grids.

Secondly, the urgency with which we must proceed to strengthen, modernize, and protect our electrical grid is increasing every day.  From my perspective, the vulnerability of the grid to physical (not cyber) attack isn't really that different than it was decades ago.  What appears to have changed is (a) the fact there are those out there who actually would seek to attack the grid, and (b) our society is so much more dependent on the grid than we were several decades ago.  And of course the cyber vulnerability (not my area of expertise) is a new vulnerability driven by the ever-expanding integration of digital and network technologies into the grid.  The big question, of course, is exactly how does a society proceed to effectively protect such a vital, fragile, exposed, and accessible infrastructure?

Finally, I was reminded of a "Doomsday Prepper" episode I saw some months ago.  (Full Disclosure: I'm not a real prepper, but I do feel it's only prudent to at least follow FEMA's recommendations to prepare for the occasional several-hour to a few days or so of power/water outage.)  I recall that episode focused on how society and individuals could prepare for and cope with (1) a global health pandemic, (2) an asteroid impact, and (3) an alien invasion.  The experts interviewed for the first and second segments were credible and thoughtful individuals.  Unfortunately, the third segment seriously eroded the credibility of the entire program.

Many, with no small amount of justification, discount the entire "Prepper Movement" as a fringe, bizarre, and irrelevant community.  In today's world of cyber attacks and physical terrorism, and with our improved understanding of the expected frequency of natural disasters (such as pandemics, solar storms, asteroid impacts, etc.), a good case can be made that it's unreasonable to live as if these risks don't exist.  These risks are real, but most of us simply choose to act as if they don't exist, or to fatalistically resign ourselves to being a victim if they should occur.

So... given the results of the new FERC grid vulnerability study, you might just want to cozy up to that "Prepper Next Door", buy a generator and a lot of fuel, install some solar panels, or ... ???

Really... how would you cope with an 18 month power outage?

And oh by the way, where did I store that flashlight and those jugs of water?

Just Thinking,

Saturday, February 22, 2014

Post # 91: Nuclear Power – Out With The Old & In With The New?

Some old things we call "masterpieces".
Some old things we call "vintage".
Some old things we call "antiques".
Some old things we call "classics".
Some old things we call "quaint".
And some old things we call "obsolete".

What do we call old nuclear power plants?

There's been an interesting discussion thread going recently over on my colleague Rod Adams' Atomic Insights Blog regarding decommissioning of commercial nuclear power plants (thanks to Joel Riddle for alerting me to the thread)...  Much of the dialog there expresses the angst of the pro-nuclear community concerning the collective impact of
  • shutdowns of "perfectly good" commercial nuclear power plants that are not profitable
  • shutdowns of plants that require major investments to continue to operate
  • shutdowns of plants that simply are "worn out"
  • the aggressive pursuit of decommissioning business by nuclear reactor vendors
  • the conversion of nuclear power plant sites to non-nuclear generation uses
I want to offer some semi-random thoughts on the subject here, as this post is really too lengthy to fit nicely into a comment on Rod's blog...


In my view, it is reasonable, proper, and to be expected that our current nuclear plant vendors would aggressively pursue nuclear plant decommissioning business.  Who better to do it?  Hurray for Westinghouse!  If it needs to be done, I want the guys doing it who know the technology.  There seems to be some subliminal fear in some quarters that success in the decommissioning business will steal the hearts of the reactor vendors.  Personally, I don't worry about that.  It's a business.


Our current fleet of Gen-II nuclear power plants was simply not designed and constructed to accommodate major plant component replacements, upgrades, and improvements.  The idea back in the 1960s and early 1970s was that we were entering into a golden era of commercial nuclear power. The "status quo" fleet would be continually evolving to newer and better technologies and plant designs.  The nuclear power enterprise would continually renew itself.  Plants would run for 40 years (a commerce-based decision – not a technical limitation) and then be replaced with something much better.  That didn't happen for a variety of reasons.

The financial woes of the current nuclear fleet are primarily a function of two things:
  • "too-cheap-to-meter" natural gas 
  • electricity market deregulation
The first factor (cheap natural gas) appears here to stay for at least a decade or two.  It's hurting renewables (or would be if they were not so heavily subsidized) and it's gut-punching nuclear power.  From a business perspective, who wants to fight the nuclear battle when it's comparatively quick, easy, and cheap to go with natural gas and rake in the profits with practically no tangible downside?  The second factor (deregulation) has put a real squeeze on merchant plants – who are finding it increasingly difficult to sell their power to customers who have an option to purchase cheap gas-generated power.  (This pressure is only going to increase in the foreseeable future.)


I am not among those who are sanguine that continued nuclear power development elsewhere in the world (outside of the U.S.) will save the nuclear power option.  (I wish it were true.  I just don't believe it is.)  From my perspective, nuclear power plant vendors are continuing to apply the development paradigm that has failed inside the U.S. to international markets.  (Any size you want, as long as it's huge.  Any cost you want, as long as it's huge...) Very soon, the international market will start to behave more and more like the U.S. market.  Factors such as plant cost, plant size, operating complexity, etc., will become at least as large an obstacle to expanded nuclear power deployment elsewhere as they have become here. What is needed is a fundamental change in the way nuclear power is deployed.  With all due respect, China (notwithstanding its recent advanced reactor aspirations) and South Korea cannot succeed in creating a new future for nuclear power by pursuing a worn-out deployment paradigm.


I believe the "magic recipe" for the eventual re-emergence of commercial nuclear power has several elements:
  • Continued safe operation of the current commercial fleet –  all bets are off if we sustain another "Fukushima-like" accident.  Continued accident-free operation is a prerequisite for a nuclear revival.
  • Financial – the capital cost of nuclear power plant options has to come down radically, or a radical new model for power plant financing must evolve.  I'm not a financial guru, and have no magic answers, but I know that $10B market cap companies aren't going to purchase $6B assets solo.  It isn't really reasonable in free-market economies for us to expect a technology to prosper if its entry and incremental capital cost is so large it can only be afforded by 10% of the prospective customers for the technology.
  • Choice in plant sizes – Small Modular Reactors are essential to match diverse grid sizes, variable demand growth, and generating company budgets.
  • Longer Plant Lifetimes – I originally shared some thoughts about my concept of "Centurion Reactors" back in 2009 here on this blog... Those thoughts were based on some initial thinking I had done a few years before with Dr. Alvin Weinberg here in Oak Ridge and a paper I presented on the topic at the 2009 Winter American Nuclear Society Meeting.  The inter-generational benefits of plants with 100-yr lifetimes are immense... but there are serious challenges and conflicts to confront (I'll post more on this in the future).
  • Wise management of nuclear sites – Certified nuclear generation sites are a great resource and a terrible thing to waste.  This is a growing issue in the U.S. and Europe.  As current generation nuclear plants are shut down, we need to maintain the ability to repopulate current nuclear plant sites with newer nuclear capacity.  This is a particularly thorny challenge because (a) generating companies need the site to generate revenue, and (b) creeping development and population growth around existing sites will make it ever-more difficult to maintain nuclear capacity at some sites.  And then there's the question of new nuclear sites – can we make them both "grid accessible" and locate them where they are immune to future population growth around the plant?  (This is a major threat to the viability of the Centurion Reactor concept...)
  • More efficient licensing & regulation of nuclear power plants – plants that may look very different than our present Gen-II fleet.


We have to face our personal demons and inconsistencies as a pro-nuclear community.

Many in the pro-nuclear community espouse fiercely free-market / low regulation philosophies while, at the same time, advocating what amounts to a strong top-down federal direction of energy policy.  Businesses are in business to make money for their owners by providing value to their customers – not to serve as an instrument of national policy.  Public utilities are monopolies that exist to serve basic societal needs in a manner that does not compete inappropriately with the private sector. (Though many would argue this model has been obliterated by the cable TV business – but that's another story...)
  • Is electrical energy (and nuclear power in particular) so strategic in terms of our national interest that it should be nationalized (whatever that means)?  Most of us would answer "No!" to that question.
  • How can a free-market drive us or evolve to an "optimal solution" (whatever that means) with a traditional "one size fits all" product placement strategy?

After all... Why is there no nuclear power equivalent to Moore's Law?  Really.

Just Thinking...

Tuesday, January 14, 2014

Post # 90: Nuclear Power, Natural Gas, Lemons, and Lemonade

Everyone reading this blog knows I'm a strong advocate of nuclear power.  I've spent much of my career in the commercial nuclear power safety and advanced reactor concept development arenas. But I like to think I'm a realistic and honest advocate.  Thus the following thoughts...


Readers of this blog are aware the technology of fracking has unleashed hitherto unrecoverable reserves of natural gas and petroleum in the U.S.  Barring any unforeseen complications, it appears two of the most significant impacts of the attractive price and availability of these new-found fossil resources in the U.S. will be:
  1. the greenhouse gas emissions footprint of electricity production in the U.S. will be significantly reduced on a "per MWhr" basis (that's good); and
  2. the sense of urgency and support for development of new non-fossil electricity production technologies will be reduced (that's bad, because it results in over-dependence on a single energy source). 
The U.S., for all its strengths, has a lack-luster record of innovation during periods in which two conditions exist:

     (A) there is no imminent threat to our lives and livelihoods, and

     (B) a low-risk, financially-attractive option exists to meet our immediate needs.

Thankfully, there appears to be no "A" on the horizon, and fracked natural gas wonderfully fits condition "B".

I've blogged before (November 2011, Post # 57: Energy Technology: The Innovation Challenge)  about the embarrassingly-low rate of innovation in the nuclear energy business and the reasons for it.  As I said then,

"The environment in today’s nuclear energy enterprise is hostile to innovation.  Not by intent, but in reality nevertheless.  The industry is highly regulated.  It is very costly to do research, development, and demonstration.   It’s a very capital-intensive business.  The barriers to entry are incredibly high.  The down-side risks of innovation are more easily rendered in practical terms than the upside gains.  Often it seems everyone in the enterprise (federal and private sectors) are so risk-averse that innovation is the last thing on anyone’s mind.  In this environment, “good-enough” is the enemy of “better”.  Humans learn by failing.  It’s the way we learn to walk, talk, and ride a bicycle.  Our environment today has little tolerance for failures at any level.  There’s no room for Thomas Edison’s approach to innovation in today’s world.  On top of all of this, or perhaps because of it, the nuclear industry invests less on R&D, as a percentage of gross revenues, than practically every other major industry you might name."

This reality, in combination with the absence of an imminent threat or external forcing function, and in the presence of an abundant and "cheap" supply of natural gas, leads me to conclude:

the "U.S. nuclear renaissance" so longed-for by those in the nuclear power community is dead – for the foreseeable future. 

Stated differently, we seem destined to see, at best, only a handful of large commercial nuclear power plants, and a few evolutionary small modular light water reactor (SMR) power plants constructed in the U.S. over the next twenty years...

That's the "Lemon".  This is the "glass half-empty" view.


You've heard the old adage, "When life gives you lemons, make lemonade..." ?  So, what's the "Lemonade"?

The era of cheap, abundant natural gas will eventually come to an end.  What then? What arrows will we have in our "energy quiver" to replace it?

Provided that:
  • no major commercial nuclear accidents occur that impact public health and the environment;
  • the Vogtle and Summer construction projects are successful;
  • the world-wide deployment of current and near-term nuclear power plant technologies continue; and
  • someone(s) actually deploy evolutionary Small Modular Light Water Reactors...
nuclear power will remain an important element of the energy generation mix in the U.S. for the foreseeable future.  Thus, nuclear power will have an opportunity to win its way back to the deployment table when conditions change if suitable technology is available at that time.

The question, then, is "What will/should that future nuclear energy option be?"  Can we do it better – far better – than we've done it to date?

Thanks to fracking and natural gas, we now have the luxury of considering different approaches to nuclear energy.  It appears we will have at least a few decades to ponder that question and to develop the answer.

This grace period to incubate and develop improved nuclear energy options is the "Lemonade".  This is the "glass half-full" perspective.


So, what are the functional attributes of my imagined future "Generation Phoenix" nuclear power plants?  

F. J. Bertuch (1747-1822)
I suggest nine attributes that combine to provide a starting point for those who wish to tackle the grand challenge of reverse-engineering Generation Phoenix nuclear energy system concepts for the latter half of the 21st century:
  1. SAFETY/RISK: the plants should be much "safer" (measured in terms of public health risk, investment risk, and environmental risk) than today's plants.  The risk of an accident that would result in major land contamination and long-term relocation of surrounding human populations, or major investment loss in the plant, should be significantly lower than that presented by today's plants.
  2. CAPITAL & OPERATING COST: the plants must be affordable and, yes, even attractively priced in terms both of their capital and their operating costs.  This implies an attractive cost of electricity and process heat delivered to the customer.
  3. SIZE: the technology should be scalable. The plants should be available in sizes appropriate to meet the needs of diverse deployment strategies;
  4. LOAD FOLLOWING CAPABILITY: the plants should have the robust load-following capabilities required to meet dynamic, mixed-generation electrical grids (i.e. grids with significant wind/solar generation components);
  5. DUAL USE: the plants should operate at sufficiently high temperatures to supply the process heat requirements of the (then) current major industries requiring high-temperature process heat;
  6. RELIABILITY: the plants must be at least as reliable as today's fleet of commercial light water reactors – preferably even more reliable;
  7. PLANT LIFETIME: the plants should have a design lifetime of at least 100 years.  They should be designed in such a way that major components can be replaced easily.  (I coined the term "Centurion Reactors" a few years ago to describe such reactors.)
  8. WASTE: the plants must have a radioactive waste management approach that society (not simply the industry) embraces as acceptable and sustainable;
  9. PROLIFERATION: the plant designs and their operating strategies, when combined with (then) extant nuclear proliferation protocols, must not present an unacceptable nuclear proliferation threat.
So there you have it.  My nine performance criteria / functional requirements for future Generation Phoenix nuclear power plants in the "post fracking" or "post-natural-gas" era...


Just Thinking...

Sunday, December 29, 2013

Post # 89: Science and the Sandbox

I'm reading an interesting book that deals with the subject of how science gets done, and how it is converted to societal impact (one of my favorite subjects) – "The Idea Factory: Bell Labs and the Great Age of Innovation".  I've also found one can gain interesting (sometimes provocative) insights on the same subject from the Nobel Banquet Speeches of newly-minted Nobel Prize winners.  This week I read Dr. Randy W. Schekman's Dec. 10 Nobel Banquet Speech. Dr. Schekman is a co-winner of this year's Nobel Prize in Physiology or Medicine.  He delivered a short but thought-provoking speech on the role of government in "managing" science...
Schekman quotes from Vannevar Bush's 1945 report, "Science, The Endless Frontier" (Bush was the science adviser to Presidents Roosevelt and Truman):

"Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown ... Freedom of inquiry must be preserved under any plan for government support of science."

Schekman then goes on to lament the modern tendency for governments to meddle with scientists' exercise of their curiosity and talents:

"... And yet we find a growing tendency for government to want to manage discovery with expansive so-called strategic science initiatives at the expense of the individual creative exercise we celebrate today. Louis Pasteur recognized this tension long before the trend towards managed science. He wrote, "There does not exist a category of science to which one can give the name applied science. There are sciences and the application of science, bound together as the fruit of the tree which bears it".

With all due respect to Schekman, one is left with the impression he believes the more appropriate role of government is simply to spread funding around to a "Priesthood of Scientists".  The Priesthood, snug in their laboratory sanctuaries, and safe from the buffeting of current human and environmental realities, would deliver a continuing cornucopia of discoveries that would somehow solve society's and the planet's most pressing needs.  I guess, in Schekman's mathematics:

Funding + Faith – Oversight  = Useful Solutions


Schekman continues by citing Louis Pasteur as an example of someone who recognized the evils of "managed science".  While it is certainly true one can identify a plethora of examples in which the results of basic research yield unexpected impacts, the elapsed time between the research and the impact varies wildly.  (Brings to mind the old story about the blind hog underneath the acorn tree...)

But a different view was offered back in 1997 by Donald Stokes, in his book, "Pasteur's Quadrant – Basic Science and Technological Innovation".  Stokes was himself no lightweight.  For eighteen years he was the Dean of Princeton University's Woodrow Wilson School of Public and International Affairs.  Among other things, he was a fellow of the American Academy of Arts and Sciences, the National Academy of Public Administration, and the American Association for the Advancement of Science.

Contrary to Schekman's view, Stokes, whose analysis of the interplay between unbridled scientific research and federal public policy spans the period from the late 1800s through the late 1990s, concludes that federal support should be focused on "use-inspired basic research" – research that is related to and focused on delivery of impact and results relevant to today's pressing challenges. (Italicized words are mine, not Stokes'.  Read his book for the details...)

There are, of course, different perspectives on the relationship between "discovery science" and "applied research" and their justifications based on "delivered solutions" and "societal impact".  And then there's the question of the appropriate roles of the public vs. private sector, and of the individual (or "lone wolf") researcher vs. large research organizations.  All of this, and much more, can and should influence public policy relative to, and federal funding of, the scientific enterprise.

With all due respect to Dr. Schekman, I lean heavily toward Stokes' view.  From my vantage point, scientific research (especially in the U.S.) suffers from multiple unhealthy realities and dissonant voices:
  1. A weakening of society's belief in absolute truth and the value of seeking it.  The unavoidable result of the weakening of belief in absolute truth is a devaluation of the search for it – a reduction of support for research and pursuit of knowledge.  Think about it...
  2. An entitlement mentality on the part of many in the scientific research business.  Schekman's comments (to me) hint of this attitude.  You can see it manifested in many quarters.  One that comes to mind is the aggressive position taken by SOME in the global climate change research community that we should pour enormous amounts of funding into the research agenda of the global climate change community without regard to requirements for true verification and validation of methods and models against real-world data (but that's a subject for a future blog.)
  3. Shrinking federal "discretionary" budgets.  Scientific research comes after paying the federal debt, entitlement programs, and national defense. (Who can argue with that?)
  4. A distortion of Stokes' definition of "use-inspired basic research" by leaders in the federal research establishment.  It is very difficult to reconcile an objective reading of Stokes' definition of use-inspired basic research with some elements of the federal R&D portfolio for the past decade or two.  Pasteur wasn't playing around in a sandbox with blind faith that a cascade of useful solutions to pressing problems would somehow magically emerge.  He was focused on lines of research relevant to his chosen problem.  Things have begun to improve a bit with regard to federal R&D investments over the past few years, but I'm confident an objective review of the federal R&D portfolio would bring to light a plethora of "R&D investments" that are simply impossible to justify based on prudent public policy.
  5. A demand, in some quarters, that every federal research investment must be successful.  This risk-averse viewpoint, often touted by those claiming to be caretakers of the American Taxpayer, is misguided and whispers a misunderstanding of how scientific discovery, engineering research, and technology development enterprises work.  This relates closely to Schekman's (valid, in my view) lament that many in the government bureaucracy believe discoveries and breakthroughs can be "programmed" and scheduled.
  6. A focus on immediate return on scientific research investment by non-governmental entities.  This is (sort of) the opposite view of the entitlement crowd.  It is held and practiced by many industrial concerns.  "If we can't see a substantial return on our research investment within 2-3 years, we shouldn't be doing it." (I've blogged before about the embarrassingly low levels of research investment by the private sector in the nuclear energy arena.)
So, enough regurgitation of the status quo.  What are my proposed solutions?

More about that in an upcoming post! :)

Just Thinking &
Happy New Year!


Monday, December 9, 2013

Post # 88: The 3COP Method – Transforming the Oral Presentation Skills of Technical Professionals

I'm a terrible blogger.  Really.

I routinely violate the 1st Rule of Blogging, which states that one must add new content to one's blog frequently and regularly.  The optimal interval between entries is supposedly no more than one week.  Daily is far better.  Really?

Look down below at the date of my last previous posting – August.  Que pasa?

I am not a "professional blogger".  I follow "3 Simple Rules" for blogging: (1) I do not blog simply to demonstrate I'm alive; (2) I blog when I have a well-structured, fresh, original, or insightful thought to offer; and (3) I honor my readers and never lose sight of the privilege bestowed on me by those who actually take time away from the other demands of life to read what I have to say.

So why no blog since August?

Well, it's not because I've quit observing and thinking.  It's because I've been otherwise preoccupied.  Two other endeavors have sequestered my attention during the past several months.  First, we've had some serious family health issues this year.  They are, thankfully, mostly behind us.  But they have deserved and demanded my priority.  Secondly, during the past six months, I've devoted virtually every free moment to (finally) bringing to fruition a vision I've had for well over ten years...

One of the reasons I blog is that I have a passion for communicating technical information to diverse audiences.  One of the most disconcerting observations I've made about technical professionals during the course of my career is that some of the finest scientists, engineers, and project professionals lack the training and skills to excel during those critical oral presentations that punctuate careers and business life cycles in scientific, technical, regulatory, and "high-tech" businesses. 

This really shouldn't be surprising, because most technical professionals have had no formal training in the craft of preparing and delivering effective technical oral presentations.

For well over ten years, I've had a vision for integrating my (now 35 years of) experience as a technical communicator in the energy and research business, with the best thinking of the cognitive science community, to create a method for preparing and delivering high-consequence oral presentations in high-stress environments (think Regulatory & Safety Reviews, Investor/Sponsor Meetings, Best & Final Proposal Presentations, Major Project Reviews, etc.).

I'm pleased to say the vision is now a reality: announcing the "3C-Oral Presentation" or "3COP" Method.  (In case you're wondering, the "3C" stands for CLEAR, CONCISE, and COMPELLING.)   During the second half of this year, I put the finishing touches on 3COP, and began conducting 1-day workshops for my clients at Advanced Technology Insights, LLC.

I'm passionate about the 3COP Method, because I know it can transform the communication effectiveness of our technical communities, strengthen relationships, promote broader understanding of complex issues, advance careers, and build business success for its practitioners.

The 3COP Workshop is an immersive training experience designed to transform the oral presentation skills of technical professionals whose success is captive to their ability to effectively communicate complex, controversial, and detailed information in demanding business and tightly-controlled regulatory environments.  If you and your company are in the 
  • research
  • engineering development
  • regulatory compliance
  • project management, or 
  • high-tech product business,
the 3COP Method is for you!

Please check it out at www.ATInsightsLLC.com today!

Just Thinking...


Saturday, August 24, 2013

Post # 87: The Cost of Higher Education – an Eye Opener

One of my favorite topics is the interaction of technology and society (thus the "byline" for this blog)...  One of the major channels of this interaction is the higher education enterprise.  I've blogged before that I believe the cost of a college degree is, in many cases, outrageously overpriced in both absolute and relative terms.  (I say this from the vantage point of one who is a product of the academic enterprise, who has interacted heavily throughout my professional career with the academic enterprise, and who is a parent of young adults who have availed themselves of both traditional 4-year colleges and 2-year technical schools.)

I'm convinced many (some might even venture to say most) college degrees aren't worth what they cost - if one measures "worth" in terms of the value society places on the degree.  My basic metric for the value society places on a degree is society's willingness to pay for the exercise of the knowledge and skills supposedly represented by that degree.  (Now of course, the other element of this value proposition is the personal internal fulfillment and satisfaction one gains from the college experience and the knowledge gained therein.  But that's not my topic here...)  Based on my metric, it's abundantly clear a very large percentage of the college degrees being granted in this country today simply aren't worth their cost.

This morning I read one of the most insightful and damning assessments of this situation I've seen. It's an interview with Richard Vedder, of Ohio University and the Center for College Affordability and Productivity.  The interview is documented in an opinion piece by Allysia Finley in the Weekend Edition of the Wall Street Journal.  I encourage you to read it.  Provocative and thought-provoking.  Good stuff.

Some have asked about my absence from the blogosphere during the past month...  The answer is that I've been very busy with clients.  I have also been putting the final touches on a new 1-day workshop I'll be offering in the near future.  The workshop, entitled "Mastering High-Stakes Oral Presentations for Scientific, Technical, and Regulatory Professionals," is a 1-day equipping event designed to transform individual and organizational oral presentation effectiveness when and where it matters most.  More about this in a future blog...

Just thinking...