
The Tragedy of Stafford Beer


SIGMA Moves

During the pandemic, I was seduced by a charming British management consultant: a debonair James Bond type who went from driving a Rolls Royce around his countryside estate, to orchestrating the Chilean economic experiment under Allende, to teaching Brian Eno the principles of complex systems in a stone cottage in Wales. Stafford Beer lived a remarkable life.

What the abandonment of the pinnacle of capitalist achievement for the most realistic effort to build cybernetic socialism does to a mfer.

The recent 50th anniversary of the 1973 coup in Chile has sparked a resurgence of interest in Beer, who is most famous for running Project Cybersyn, the attempt to restructure Chile’s economy under Allende. There has been growing interest in cybernetic socialism over the past decade, starting with Eden Medina’s history Cybernetic Revolutionaries, Evgeny Morozov’s article on Cybersyn in the New Yorker, and a chapter on the enterprise in Leigh Phillips and Michal Rozworski’s book The People’s Republic of Walmart. Theory streetwear brand Boot Boyz Biz even put out a Cybersyn throw rug.

Just this month, Morozov released an extensive and entertaining retrospective of the entire period in the form of a nine-episode podcast. Although I enjoyed the whole thing, I found the narrative account of Beer’s response to the coup and the general distrust of Allende in the UK most illuminating.

Like many of the postwar behavioral scientists, Beer was profoundly shaped by World War II, which upended his understanding of the tradeoffs between efficiency and command. The war was the last society-wide example of alternative models of organization before the establishment of the American-led capitalism that has dominated all of our lifetimes, and it suggested that this style of systems analysis could enhance organizational capacity well beyond the military sphere.

After the war, Beer became one of the leading practitioners of operations research (something you can still specialize in at business school today) and worked to optimize the operations of factories and larger corporate organizations. He saw huge success working with United Steel in the 1950s before leaving to found a consulting company—amusingly—called SIGMA.

Much of the research he published in the 1960s and 1970s, including the monumental The Brain of the Firm, has been influential in management theory. In Morozov’s telling, Beer’s decision to step away from this successful career is puzzling; the Stafford Beer of The Santiago Boys is motivated to come to Chile by a combination of professional curiosity at being handed the reins of an entire country to test out his theories, and some vague lefty sympathies developed during his time stationed in the British Raj. This is narratively convenient, but my read of Beer’s project is significantly more radical—and the result all the more tragic.

Although I’ve been a fan for a few years now, I only just got around to Beer’s 1975 book Platform for Change. This is one of the strangest and most gut-wrenching books I’ve ever encountered.

As someone who reads and writes for a living, in the hope that these actions will change the world for the better, I occasionally experience an odd emotion when I read something decades old that condenses a thought I’ve been grasping towards. Part of this emotion is simple scientific envy, that someone beat me to it; part of it is pride, a desire to share the idea with my peers and enjoy some reflection of its glory. Beneath these feelings, though, is despair: despair that someone else, someone more famous and smarter and older than me, already wrote this idea down, and yet it didn’t matter.

Stafford Beer’s writing, career trajectory and just overall weltanschauung had this effect on me. The man:

  • Was born into aristocratic British society and thus had establishment connections and legitimacy.
  • Correctly diagnosed a variety of fundamental incompatibilities between our inherited institutions, the contemporary scale of our societies and rapid change in communication technology.
  • Developed an impressive reputation through over 500 invited lectures and frequent media appearances, as well as a flourishing and remunerative career as a business consultant.
  • Cashed out this reputation and these connections in a series of increasingly transgressive attacks on what he saw as a complacent and unscientific establishment. (His inaugural address, upon being elected president of the Operational Research Society, is particularly spicy.)
  • Actually had the chance to implement his radical ideas at the level of a medium-sized country, and demonstrate at least preliminary evidence for their effectiveness.

The vibe of the back flap of Platform for Change:

I am fed up with hiding myself, an actual human being, behind the conventional anonymity of scholarly authorship

Platform for Change is the culmination of Beer’s project, a manifesto over which he insisted on having total creative control. It’s a testament to both the spirit of the 1970s and the popularity of his previous books that he managed to get a 500-page tome published, let alone one with four different colored pages.

The gold pages are written in the metalanguage. This is necessary, Beer argues, because our existing concepts, language and medium of communication are technically incapable of making the critique he needs to make, or of proposing solutions that actually address the problem.

I recognize this impulse in myself and in my generation of online-first intellectual activists. Once you start thinking hard and pushing up against the limits of the tools you have been given to think with, it becomes clear that you need to fashion some new tools. This, of course, is a blog about meta-science, where I have frequently argued that the media technologies of linear natural language text and in-line citation are incompatible with a subject as fast-moving as social media.

Beer correctly argues that the format which serves novels (which unfold in temporal sequence) is incompatible with explanations that involve dynamic feedback. He knows that writing a weird book with lots of baroque systems diagrams and color-coded pages will alienate a huge portion of his audience, but he does it anyway, because he thinks it’s necessary to make the point he wants to make. It’s an incredibly long and detailed book, and technically impossible to summarize with normal language, but I found it extremely impressive and compelling.

And yet he failed. Platform for Change was a commercial disaster (obviously), and his serious scholarly output fell off sharply. He was only in his late 40s, potentially at the peak of his career; he could easily have coasted into a career as either a senior management consultant making millions or a contrarian public intellectual.

But the experience seemed to break him; the willfulness that had been necessary to go against the grain won out against the negative feedback he was getting from his increasingly outré efforts. Morozov captures the bathos of Beer’s stubborn insistence on living in that stone cabin in Wales, without running water or a telephone, cut off from the world.

There is a deep, cybernetic irony in this story. Beer’s entire approach to “viable systems” is that they need to adapt to shocks without becoming in some way denatured. A common problem is overcorrection: consider the way the US responded to 9/11 with the permanent expansion of the security state.

Beer’s problem seems to have been the opposite: undercorrection. Once he had settled on the belief system articulated in Platform for Change, there was no shaking it—his refusal to compromise led him down the path from mainstream management consultant to renegade intellectual activist, all the way down to lonely poet-mystic drinking himself to death in the Welsh countryside.
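The failure modes at play here are easy to see in a toy feedback loop. The sketch below is my own illustration, not anything from Beer’s books: a regulator observes the deviation from equilibrium and applies a correction scaled by a gain. Too much gain and the system overcorrects into ever-wilder oscillation; too little and the shock is never absorbed.

```python
# A toy regulator (my illustration, not Beer's): at each step it observes
# the deviation from equilibrium and subtracts gain * deviation.

def simulate(gain, shock=10.0, steps=12):
    """Return the deviation trajectory after a one-time shock."""
    deviation = shock
    history = [round(deviation, 2)]
    for _ in range(steps):
        deviation -= gain * deviation  # the regulator's correction
        history.append(round(deviation, 2))
    return history

print("balanced:       ", simulate(gain=0.9))   # shock absorbed quickly
print("overcorrection: ", simulate(gain=2.2))   # growing oscillation
print("undercorrection:", simulate(gain=0.05))  # deviation persists
```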

I’ve written about Beer’s concrete proposals for cybernetic government before, and I expand on them below. But the main point of this post is to reflect on the intersection between meta-scientific reform, the history of thought and the sociology of intellectual work inside and outside the academy. Beer’s experience is extremely relevant for anyone trying to change the relationship between academia, science, technology and society, as a guide to both understand the past and orient oneself for the future.

It happens that the Center for Information Technology and Policy, where I’m visiting at Princeton this year, shares a building with the Department of Operations Research and Financial Engineering (ORFE). I was excited to learn this: a chance to walk downstairs and talk to someone about Operations Research, the strange postwar discipline that seems to have absorbed the cybernetic impulse which once flourished across the social sciences and hidden it away in business schools.

But one of the grad students told me that actually the name is somewhat anachronistic—no one really does Operations Research anymore, it’s all just financial engineering aka math and stats. Indeed, it looks as though 1970, the year Stafford Beer was elected president of the Operational Research Society in the UK, was the peak of the intellectual movement.

Cybernetics, at least, is making a bit of a comeback.

Can you recognize an Angel?

Designing Freedom, Beer’s 1974 book, is far more accessible than Platform for Change, and indeed should be required reading for students of economics, political science and the history of science. The book outlines the way that we can (and must) use new information technology to design freedom. Beer conceives of a networked economy and cultural sphere that enhances human capacities without warping human desires.

Beer thus follows the humanist tradition established by Norbert Wiener, the grandfather of cybernetics. Wiener immediately understood the implications of his development; in Cybernetics, he describes his attempts to explain what was coming to the heads of the country’s largest labor unions, who neither understood the warning nor attempted to use cybernetic principles to enhance the effectiveness of their organizations.

Wiener’s second book, The Human Use of Human Beings, is an even broader attempt at outlining a society built on cybernetic principles: the most effective society, he argued, would be able to make full use of each human’s capacities rather than forcing the overwhelming majority of humans to contribute only tightly controlled and alienated labor.

Designing Freedom continues along these lines, a crucial intervention in the tired 20th century debate between government control and “liberty.” Beer proposes a Liberty Machine: a system designed so that its output is liberty. That is, we should understand that our society is “not an entity characterized by more or less constraint, but a dynamic viable system that has liberty as its output.”

A certain fatalism here is understandable. Capitalism and technology have produced a globe with eight billion humans and no way to slow down. It can feel like there is zero margin for experimentation, that any move away from the present course could spell disaster.

If we want to both radically restructure society and avoid billions of deaths, we have to center concerns about scale. The definition of “Big Data” is often given as “more data than you can fit in your machine’s active memory,” but Beer has another threshold in mind, one not ameliorated by Moore’s Law: the human brain.

Among the tasks Beer sets out, “Recognize ecological systems” and “Undertake world government” strike me as more immediately relevant, but his transcendent weirdness peeks through with the phrase “recognize an angel.” Threading the humanist needle requires recognizing that our capacities as humans are indeed unique among beings and yet woefully inadequate for the tasks we ask of individual humans.

But the individual human brain remains a crucial bottleneck for the flow of information. Given basic biological constraints, it is impossible to centralize all of society’s necessary information storage and processing within the brain of a single individual. This problem can be addressed by groups and technology, but usually in a wasteful or destructive fashion.

Strict bureaucratic control can enhance the scale at which organizations can act—at the cost of the speed of the reaction or the scope of the action space. Complete decentralization limits the capacity of the organization to pursue strategic objectives, generally collapsing into autarky and self-interest. Designing freedom, in Beer’s view, means designing institutions that allow humans to use their full capacities as humans to regulate society, collectively and at scale.
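The formal version of this point, which Beer took from W. Ross Ashby and invoked constantly, is the law of requisite variety: a regulator can hold a system’s outcomes within acceptable bounds only if it commands at least as much variety (as many distinguishable responses) as the disturbances it faces. In logarithmic terms,

$$
V(\text{outcomes}) \;\geq\; V(\text{disturbances}) - V(\text{regulator}),
$$

so a single brain, with its fixed regulatory variety, cannot hold down the outcomes of a society whose disturbances vastly exceed it. Designing institutions is, in Beer’s vocabulary, variety engineering: attenuating the variety that reaches human regulators and amplifying the variety of their responses.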

There is a branch of the anti-tech left that rejects this conception of control-as-regulation, in the naive view that these forces can simply be contained. (Indeed, both Wiener and Beer were criticized along these lines by the still-Sovietpilled left during their lifetimes.) This view descends from the ironic tendency of some anti-Anglo-imperialists to view the United States government as the only agent of consequence, with a cabal of capitalists directing its violent energies. Morozov’s podcast falls into this trap; while the significant CIA efforts he describes did in fact happen, the first episode whips back and forth between the leftist revolutionaries “realizing” that the CIA would “never” allow Allende to win election fairly and Allende winning election fairly.

Beer correctly diagnoses this impulse, one that I detest. It is moral narcissism for intellectuals to exhaust their human capital endowments debating how they can minimize their own sins while the forces of technocapital grow stronger by the day. Beer’s now-quaint description of the threats we face illustrates just how badly we have been losing over the past fifty years; the “electronic mafia” he fears is easily recognizable in the control technologies of algorithmic feeds and product recommendations:

“What is to be done with cybernetics, the science of effective organization? Should we all stand by complaining, and wait for someone malevolent to take it over and enslave us? An electronic mafia lurks around that corner.”

“We allow publishers to file away electronically masses of information about ourselves—who we are, what are our interests—and to tie that in with mail order schemes, credit systems, and advertising campaigns that line us all up like a row of ducks to be picked off in the interests of conspicuous consumption.”

In the absence of an intentional, humanity-enhancing system of electronic communication technology, powerful entities both governmental and corporate will use those technologies to erode our humanity and thus deprive us of liberty without ever resorting to the “man with a gun” who is the primary avatar of unfreedom among people who fear centralized state power.

This is the same kind of insight that I described as “the cybernetic event horizon” in Flusser’s work, and in the same spirit that Deleuze discusses “control” and Foucault “governmentality” (though Beer and Flusser both got there before these canonized wordcels).

Today, it is clear that fears of an “electronic mafia” were if anything understated. Effects commonly attributed solely to the internet and social media have in fact been developing for decades, though modern communication technology has obviously accelerated the trend.

Mark Zuckerberg didn’t wake up one day and decide to cause teenage girls to have panic attacks about not getting enough Likes on their Instagram posts; he and the other electric dons designed and piloted these systems whose output was human anxiety rather than human freedom.

AND DON’T TELL ANYONE ELSE UNLESS I SAY SO

 

Beer’s vision for how communication technology could be used to enhance human freedom doesn’t look all that different from how we interact with computers and smart devices today. The design of the user interface, replete with armchair and cigar, is perhaps a bit whimsical compared to our usual set-up, but the ability to look up information, do a deeper dive on news items, play music and play long-distance chess are all things we can do today.

The one exception is the ease with which this setup integrates information into the user’s tax file. One of Beer’s crucial insights is that increasing the total number of channels of communication tends only to produce confusion and give private actors more leverage. Personal income taxes are an old form of communication between citizens and the state, and they are today such a source of annoyance because they haven’t been overhauled to fit within the rest of the societal apparatus for collecting and transmitting information. Concerns about privacy are misplaced: at present, the only entities cross-walking all of the information about individual people are corporate data brokers and the big companies that pay them. Doing all of the relevant data collection and synthesis up front would both empower government action and undercut the corporations’ advantage.

But the bigger difference between Beer’s dream and the status quo is the phrase “AND DON’T TELL ANYONE ELSE UNLESS I SAY SO.” This isn’t specifically a privacy concern; he’s also opposing the collection of anonymized aggregate data. The crucial mechanism, what would allow the creation of tools which empower humans to act without warping our desires and indeed our natures, is to cut this channel of feedback from the individual back into the apparatus.

My normative commitment is to human freedom, human agency. The more of ourselves exist in the apparatus, the more agency we offload to it. The radical rejection of technology is simply not an option on a planet of 8 billion. Individual efforts to avoid using modern communication technology are quixotic and ultimately self-destructive, as Stafford Beer’s descent into pastoral mysticism illustrates. The best path I see requires something like his approach to effective communication and governance, to think of engineering a society which produces human freedom while ensuring homeostatic stability.


Russ Allbery: Effective altruism and the control trap


William MacAskill has been on a book tour for What We Owe the Future, which has put effective altruism back in the news. That plus the decision by GiveWell to remove GiveDirectly from their top charity list got me thinking about charity again. I think effective altruism, by embracing long-termism, is falling into an ethical trap, and I'm going to start heavily discounting their recommendations for donations.

Background

Some background first for people who have no idea what I'm talking about.

Effective altruism is the idea that we should hold charities accountable for effectiveness. It's not sufficient to have an appealing mission. A charity should demonstrate that the money they spend accomplishes the goals they claimed it would. There is a lot of debate around defining "effective," but as a basic principle, this is sound. Mainstream charity evaluators such as Charity Navigator measure overhead and (arguable) waste, but they don't ask whether the on-the-ground work of the charity has a positive effect proportional to the resources it's expending. This is a good question to ask.

GiveWell is a charity research organization that directs money for donors based on effective altruism principles. It's one of the central organizations in effective altruism.

GiveDirectly is a charity that directly transfers money from donors to poor people. It doesn't attempt to build infrastructure, buy specific things, or fund programs. It identifies poor people and gives them cash with no strings attached.

Long-termism is part of the debate over what "effectiveness" means. It says we should value impact on future generations more highly than we tend to do. (In other words, we should have a much smaller future discount rate.) A sloppy but intuitive expression of long-termism is that (hopefully) there will be far more humans living in the future than are living today, and therefore a "greatest good for the greatest number" moral philosophy argues that we should invest significant resources into making the long-term future brighter. This has obvious appeal to those of us who are concerned about the long-term impacts of climate change, for example.
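To see how much work the discount rate does in that argument, here is a back-of-the-envelope sketch (my illustration; the rates are hypothetical, not MacAskill's): the present weight of one unit of welfare n years out is (1 - r)^n, and at a century out that weight swings by two orders of magnitude as r falls from 5% to 0.1%.

```python
# Illustrative only: how the annual discount rate r controls the moral
# weight of future people. The rates below are hypothetical examples.

def weight(years, rate):
    """Present weight of one unit of welfare `years` in the future."""
    return (1 - rate) ** years

for rate in (0.05, 0.01, 0.001):
    print(f"r = {rate:<5} -> weight at 100 years: {weight(100, rate):.4f}")

# r = 0.05  -> 0.0059  (a century out, people barely register)
# r = 0.01  -> 0.3660
# r = 0.001 -> 0.9048  (future people count nearly one-for-one)
```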

There is a lot of overlap between the communities of effective altruism, long-termism, and "rationalism." One way this becomes apparent is that all three communities have a tendency to obsess over the risks of sentient AI taking over the world. I'm going to come back to that.

Psychology of control

GiveWell, early on, discovered that GiveDirectly was measurably more effective than most charities. Giving money directly to poor people without telling them how to spend it produced more benefits for those people and their surrounding society than nearly all international aid charities.

GiveDirectly then became the baseline for GiveWell's evaluations, and GiveWell started looking for ways to be more effective than that. There is some logic to thinking more effectiveness is possible. Some problems are poorly addressed by markets and too large for individual spending. Health care infrastructure is an obvious example.

That said, there's also a psychological reason to look for other charities. Part of the appeal of charity is picking a cause that supports your values (whether that be raw effectiveness or something else). Your opinions and expertise are valued alongside your money. In some cases, this may be objectively true. But in all cases, it's more flattering to the ego than giving poor people cash.

At that point, the argument was over how to address immediate and objectively measurable human problems. The innovation of effective altruism is to tie charitable giving to a research feedback cycle. You measure the world, see if it is improving, and adjust your funding accordingly. Impact is measured by its effects on actual people. Effective altruism was somewhat suspicious of talking directly to individuals and preferred "objective" statistical measures, but the point was to remain in contact with physical reality.

Enter long-termism: what if you could get more value for your money by addressing problems that would affect vast numbers of future people, instead of the smaller number of people who happen to be alive today?

Rather than looking at the merits of that argument, look at its psychology. Real people are messy. They do things you don't approve of. They have opinions that don't fit your models. They're hard to "objectively" measure. But people who haven't been born yet are much tidier. They're comfortably theoretical; instead of having to go to a strange place with unfamiliar food and languages to talk to people who aren't like you, you can think hard about future trends in the comfort of your home. You control how your theoretical future people are defined, so the results of your analysis will align with your philosophical and ideological beliefs.

Problems affecting future humans are still extrapolations of problems visible today in the world, though. They're constrained by observations of real human societies, despite the layer of projection and extrapolation. We can do better: what if the most serious problem facing humanity is the possible future development of rogue AI?

Here's a problem that no one can observe or measure because it's never happened. It is purely theoretical, and thus under the control of the smart philosopher or rich western donor. We don't know if a rogue AI is possible, what it would be like, how one might arise, or what we could do about it, but we can convince ourselves that all those things can be assigned probabilities through the power of pure logic. Now we have escaped the uncomfortable psychological tension of effective altruism and returned to the familiar world in which the rich donor can define both the problem and the solution. Effectiveness is once again what we say it is.

William MacAskill, one of the originators of effective altruism, now constantly talks about the threat of rogue AI. In a way, it's quite sad.

Where to give money?

The mindset of long-termism is bad for the human brain. It whispers to you that you're smarter than other people, that you know what's really important, and that you should retain control of more resources because you'll spend them more wisely than others. It's the opposite of intellectual humility. A government funding agency should take some risks on theoretical solutions to real problems, and maybe a few on theoretical solutions to theoretical problems (although an order of magnitude less). I don't think this is a useful way for an individual donor to think.

So, if I think effective altruism is abandoning the one good idea it had and turning back into psychological support for the egos of philosophers and rich donors, where does this leave my charitable donations?

To their credit, GiveWell so far seems uninterested in shifting from concrete to theoretical problems. However, they believe they can do better by picking projects than giving people money, and they're committing to that by dropping GiveDirectly (while still praising them). They may be right. But I'm increasingly suspicious of the level of control donors want to retain. It's too easy to trick oneself into thinking you know better than the people directly affected.

I have two goals when I donate money. One is to make the world a better, kinder place. The other is to redistribute wealth. I have more of something than I need, and it should go to someone who does need it. The net effect should be to make the world fairer and more equal.

The first goal argues for effective altruism principles: where can I give money to have the most impact on making the world better? The second goal argues for giving across an inequality gradient. I should find the people who are struggling the most and transfer as many resources to them as I can. This is Peter Singer's classic argument for giving money to the global poor.

I think one can sometimes do better than transferring money, but doing so requires a deep understanding of the infrastructure and economies of scale that are being used as leverage. The more distant one is from a society, the more dubious I think one should be of one's ability to evaluate that, and the more wary one should be of retaining any control over how resources are used.

Therefore, I'm pulling my recurring donation to GiveWell. Half of it is going to GiveDirectly, because I think it is an effective way of redistributing wealth while giving up control. The other half is going to my local food bank, because they have a straightforward analysis of how they take advantage of economies of scale, and because I have more tools available (such as local news) to understand what problem they're solving and whether they're doing so effectively.

I don't know that those are the best choices. There are a lot of good ones. But I do feel strongly that the best charity comes from embracing the idea that I do not have special wisdom, other people know more about what they need than I do, and deploying my ego and logic from the comfort of my home is not helpful. Find someone who needs something you have an excess of. Give it to them. Treat them as equals. Don't retain control. You won't go far wrong.


Is the world poor, or unjust?


Social media has been ablaze with this question recently.  We know we face a crisis of mass poverty: the global economy is organized in such a way that nearly 60% of humanity is left unable to meet basic needs.  But the question at stake this time is different.  A couple of economists on Twitter have claimed that the world average income is $16 per day (PPP).  This, they say, is proof that the world is poor in a much more general sense: there is not enough for everyone to live well, and the only way to solve this problem is to press on the accelerator of aggregate economic growth.

This narrative is, however, hobbled by several empirical problems. 

1. $16 per day is not accurate

First, let me address the $16/day claim on its own terms.  This is a significant underestimate of world average income.  The main problem is that it relies on household surveys, mostly from Povcal.  These surveys are indispensable for telling us about the income and consumption of poor and ordinary households, but they do not capture top incomes, and are not designed to do so. In fact, Povcal surveys are not even really legitimate for capturing the income of “normal” high-income households.  Using this method gives us a total world household income of about $43 trillion (PPP).  But we know that total world GDP is $137 trillion (PPP).  So, about two-thirds of global income is unaccounted for.
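The gap is easy to verify from the post’s own figures (the population value below is my assumption; the post does not state one):

```python
# Sanity-checking the accounting gap using the figures quoted above.
survey_income = 43e12   # total household income from surveys ($, PPP)
world_gdp = 137e12      # total world GDP ($, PPP)
population = 7.7e9      # assumed world population at the time of writing

gap = world_gdp - survey_income
print(f"unaccounted: ${gap / 1e12:.0f} trillion "
      f"({gap / world_gdp:.0%} of world GDP)")   # ~69%, about two-thirds

print(f"survey income per person per day: "
      f"${survey_income / population / 365:.0f}")  # ~$15: the '$16/day' claim
print(f"GDP per person per day: "
      f"${world_gdp / population / 365:.0f}")      # ~$49: three times higher
```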

What explains this discrepancy?  Some of the “missing” income is the income of the global rich.  Some of it is consumption that’s related to housing, NGOs, care homes, boarding schools, etc, which are also not captured by these surveys (but which are counted as household consumption in national accounts).  The rest of it is various forms of public expenditure and public provisioning.

This final point raises a problem that’s worth addressing.  The survey-based method mixes income- and consumption-based data. Specifically, it counts non-income consumption in poor countries (including from commons and certain kinds of public provisioning), but does not count non-income consumption or public provisioning in richer countries.  This is not a small thing.  Consider people in Finland who are able to access world-class healthcare and higher education for free, or Singaporeans who live in high-end public housing that’s heavily subsidized by the government. The income equivalent of this consumption is very high (consider that in the US, for instance, people would have to pay out of pocket for it), and yet it is not captured by these surveys.  It just vanishes. 

Of course, not all government expenditure ends up as beneficial public provisioning. A lot of it goes to wars, arms, fossil fuel subsidies and so on.  But that can be changed.  There’s no reason that GDP spent on wars could not be spent on healthcare, education, wages and housing instead.

For these reasons, when assessing the question of whether the world is poor in terms of income, it makes more sense to use world average GDP, which is $17,800 per capita (PPP). Note that this is roughly consistent with the World Bank’s definition of a “high-income” country.  It is also well in excess of what is required for high levels of human development.  According to the UNDP, some nations score “very high” (0.8 or above) on the life expectancy index with as little as $3,300 per capita, and “very high” on the education index with as little as $8,700 per capita.  In other words, the world is not poor, in aggregate. Rather, income is badly maldistributed. 

To get a sense for just how badly it is maldistributed, consider that the richest 1% alone capture nearly 25% of world GDP, according to the World Inequality Database.  That’s more than the GDP of 169 countries combined, including Norway, Argentina, all of the Middle East and the entire continent of Africa.  If income was shared more fairly (i.e., if more of it went to the workers who actually produce it), and invested in universal public goods, we could end global poverty many times over and close the health and education gap permanently.  

2. GDP accounting does not reflect economic value

But even GDP accounting is not adequate to the task of determining whether or not the world is poor. The reason is that GDP is not an accurate reflection of value; rather, it is a reflection of prices, and prices are an artefact of power relations in political economy. We know this from feminist economists, who point out that the labour and resources mobilized for domestic reproduction, primarily by women, are priced at zero, and therefore “valued” at zero in national accounts, even though they are in reality essential to our civilization. We also know this from the literature on unequal exchange, which points out that capital leverages geopolitical and commercial monopolies to artificially depress or “cheapen” the price of labour in the global South to below the level of subsistence.

Let me illustrate this latter point with an example. Beginning in the 1980s, the World Bank and IMF (which are controlled primarily by the US and G7) imposed structural adjustment programmes across the global South, which significantly depressed wages and commodity prices (cutting them in half) and reorganized Southern economies around exports to the North. The goal was to restore Northern access to the cheap labour and resources they had enjoyed during the colonial era. It worked: during the 1980s the quantity of commodities that the South exported to the North increased, and yet their total revenues on this trade (i.e., the GDP they received for it) declined. In other words, by depressing the costs of Southern labour and commodities, the North is able to appropriate a significant quantity of both effectively for free.

The economist Samir Amin described this as “hidden value”.  David Clelland calls it “dark value” – in other words, value that is not visible at all in national or corporate accounts.  Just as the value of female domestic labour is “hidden” from view, so too are the labour and commodities that are net appropriated from the global South.  In both cases, prices do not reflect value.  Clelland estimates that the real value of an iPad, for example, is many times higher than its market price, because so much of the Southern labour that goes into producing it is underpaid or even entirely unpaid.  John Smith points out that, as a result, GDP is an illusion that systematically underestimates real value.

There is a broader fact worth pointing to here.  The whole purpose of capitalism is to appropriate surplus value, which by its very nature requires depressing the prices of inputs to a level below the value that capital actually derives from them.  We can see this clearly in the way that nature is priced at zero, or close to zero (consider deforestation, strip mining, or emissions), despite the fact that all production ultimately derives from nature.  So the question is, why should we use prices as a reflection of global value when we know that, under capitalism, prices by their very definition do not reflect value? 

We can take this observation a step further.  To the extent that capitalism relies on cheapening the prices of labour and other inputs, and to the extent that GDP represents these artificially low prices, GDP growth will never eradicate scarcity because in the process of growth scarcity is constantly imposed anew.

So, if GDP is not an accurate measure of the value of the global economy, how can we get around this problem?  One way is to try to calculate the value of hidden labour and resources.  There have been many such attempts.  In 1995, the UN estimated that unpaid household labour, if compensated, would earn $16 trillion in that year. More recent estimates have put it at many times higher than that.  Similar attempts have been made to value “ecosystem services”, and they arrive at numbers that exceed world GDP.  These exercises are useful in illustrating the scale of hidden value, but they bump up against a problem.  Capitalism works precisely because it does not pay for domestic labour and ecosystem services (it takes these things for free).  So imagining a system in which these things are paid requires us to imagine a totally different kind of economy (with a significant increase in the money supply and a significant increase in the price of labour and resources), and in such an economy money would have a radically different value. These figures, while revealing, compare apples and oranges.

3. What matters is resources and provisioning

There is another approach we can use, which is to look at the scale of the useful resources that are mobilized by the global economy. This is preferable, because resources are real and tangible and can be accurately counted. Right now, the world economy uses 100 billion tons of resources per year (i.e., materials processed into tangible goods, buildings and infrastructure). That’s about 13 tons per person on average, but it is highly unequal: in low and lower-middle income countries it’s about 2 tons, and in high-income countries it’s a staggering 28 tons. Research in industrial ecology indicates that high standards of well-being can be achieved with about 6-8 tons per person. In other words, the global economy presently uses twice the resources that would be required to deliver good lives for all.

We see the same thing when it comes to energy. The world economy presently uses 400 EJ of energy per year, or 53 GJ per person on average (again, highly unequal between North and South). Recent research shows that we could deliver high standards of welfare for all, with universal healthcare, education, housing, transportation, computing etc., with as little as 15 GJ per capita. Even if we raise that figure by 75% to be generous, it still amounts to only 26 GJ per person. In other words, we presently use more than double the energy that is required to deliver good lives for everyone.
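The per-capita arithmetic behind these figures is easy to reproduce (the population value is my assumption; the post does not state one):

```python
# Reproducing the per-capita figures quoted above.
population = 7.7e9          # assumed world population at the time of writing

materials = 100e9           # tons of material resources used per year
print(f"materials: {materials / population:.0f} t per person")   # ~13 t

energy = 400e18             # joules of energy used per year (400 EJ)
print(f"energy: {energy / population / 1e9:.0f} GJ per person")  # ~52 GJ

decent_living = 15          # GJ per person, the sufficiency estimate cited
print(f"generous threshold: {decent_living * 1.75:.1f} GJ per person")  # 26.3
```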

When we look at the world in terms of real resources and energy (i.e., the stuff of provisioning), it becomes clear that there is no scarcity at all. The problem isn’t that there’s not enough; the problem, again, is that it is maldistributed. A huge chunk of global commodity production is totally irrelevant to human needs and well-being. Consider all the resources and energy that are mobilized for the sake of fast fashion, throwaway gadgets, single-use stadiums, SUVs, bottled water, cruise ships and the military-industrial complex. Consider the scale of needless consumption that is stimulated by manipulative advertising schemes, or enforced by planned obsolescence. Consider the quantity of private cars that people have been forced to buy because the fossil fuel industry and automobile manufacturers have lobbied so aggressively against public transportation. Consider that the beef industry alone uses nearly 60% of the world’s agricultural land to produce only 2% of global calories.

There is no scarcity. Rather, the world’s resources and energy are appropriated (disproportionately from the global South) in order to service the interests of capital and affluent consumers (disproportionately in the global North).  We can state it more clearly: our economic system is not designed to meet human needs; it is designed to facilitate capital accumulation. And in order to do so, it imposes brutal scarcity on the majority of people, and cheapens human and nonhuman life.  It is irrational to believe that simply “growing” such an economy, in aggregate, will somehow magically achieve the social outcomes we want.

We can think of this in terms of labour, too.  Consider the labour that is rendered by young women in Bangladeshi sweatshops to produce fast fashion for Northern consumption; and consider the labour rendered by Congolese miners to dig up coltan for smartphones that are designed to be tossed every two years.  This is an extraordinary waste of human lives. Why? So that Zara and Apple can post extraordinary profits.

Now imagine what the world would be like if all that labour, resources and energy was mobilized instead around meeting human needs and improving well-being (i.e., use-value rather than exchange-value).  What if instead of appropriating labour and resources for fast fashion and Alexa devices it was mobilized around providing universal healthcare, education, public transportation, social housing, organic food, water, energy, internet and computing for all?  We could live in a highly educated, technologically advanced society with zero poverty and zero hunger, all with significantly less resources and energy than we presently use. In other words we could not only achieve our social goals, but we could meet our ecological goals too, reducing excess resource use in rich countries to bring them back within planetary boundaries, while increasing resource use in the South to meet human needs.

There is no reason we cannot build such a society (and it is achievable, with concrete policy, as I describe here, here and here), except for the fact that those who benefit so prodigiously from the status quo do everything in their power to prevent it.

 


Jonathan Cook – A Lesson Coronavirus is about to Teach the World


A classic weekend read: insightful and radical

Jonathan Cook is an award-winning British journalist based in Nazareth, Israel, since 2001

Cross-posted from Jonathan’s website


If a disease can teach wisdom beyond our understanding of how precarious and precious life is, the coronavirus has offered two lessons.

The first is that in a globalised world our lives are so intertwined that the idea of viewing ourselves as islands – whether as individuals, communities, nations, or a uniquely privileged species – should be understood as evidence of false consciousness. In truth, we were always bound together, part of a miraculous web of life on our planet and, beyond it, stardust in an unfathomably large and complex universe.

It is only an arrogance, cultivated in us by those narcissists who have risen to power through their own destructive egotism, that has blinded us to the necessary mix of humility and awe we ought to feel as we watch a drop of rain on a leaf, or a baby struggle to crawl, or the night sky revealed in all its myriad glories away from city lights.

And now, as we start to enter periods of quarantine and self-isolation – as nations, communities and individuals – all that should be so much clearer. It has taken a virus to show us that only together are we at our strongest, most alive and most human.

In being stripped of what we need most by the threat of contagion, we are reminded of how much we have taken community for granted, abused it, hollowed it out. We are afraid because the services we need in times of collective difficulty and trauma have been turned into commodities that require payment, or treated as privileges to which access is now means-tested, rationed or is simply gone. That insecurity is at the root of the current urge to hoard.

When death stalks us it is not bankers we turn to, or corporate executives, or hedge fund managers. Nonetheless, those are the people our societies have best rewarded. They are the people who, if salaries are a measure of value, are the most prized.

But they are not the people we need, as individuals, as societies, as nations. Rather, it will be doctors, nurses, public health workers, care-givers and social workers who will be battling to save lives by risking their own.

During this health crisis we may indeed notice who and what is most important. But will we remember their sacrifice, their value, after the virus is no longer headline news? Or will we go back to business as usual – until the next crisis – rewarding the arms manufacturers, the billionaire owners of the media, the fossil fuel company bosses, and the financial-services parasites feeding off other people’s money?

‘Take it on the chin’

The second lesson follows from the first. Despite everything we have been told for four decades or more, western capitalist societies are far from the most efficient ways of organising ourselves. That will be laid bare as the coronavirus crisis deepens.

We are still very much immersed in the ideological universe of Thatcherism and Reaganism, when we were told quite literally: “There is no such thing as society.” How will that political mantra stand the test of the coming weeks and months? How much can we survive as individuals, even in quarantine, rather than as part of communities that care for all of us?

Western leaders who champion neoliberalism, as they are required to do nowadays, have two choices to cope with coronavirus – and both will require a great deal of misdirection if we are not to see through their hypocrisy and deceptions.

Our leaders can let us “take it on the chin”, as the British prime minister Boris Johnson has phrased it. In practice, that will mean allowing what is effectively a cull of many of the poor and elderly – one that will relieve governments of the financial burden of underfunded pension schemes and welfare payments.

Such leaders will claim they are powerless to intervene or to ameliorate the crisis. Confronted with the contradictions inherent in their worldview, they will suddenly become fatalists, abandoning their belief in the efficacy and righteousness of the free market. They will say the virus was too contagious to contain, too robust for health services to cope, too lethal to save lives. They will evade all blame for the decades of health cuts and privatisations that made those services inefficient, inadequate, cumbersome and inflexible.

Or, by contrast, politicians will use their spin doctors and allies in the corporate media to obscure the fact that they are quietly and temporarily becoming socialists to deal with the emergency. They will change the welfare rules so that all those in the gig economy they created – employed on zero-hours contracts – can afford to self-quarantine or take days off sick, and so do not spread the virus.

Or most likely our leaders will pursue both options.

Permanent crisis

If acknowledged at all, the conclusion to be drawn from the crisis – that we all matter equally, that we need to look after one another, that we sink or swim together – will be treated as no more than an isolated, fleeting lesson specific to this crisis. Our leaders will refuse to draw more general lessons – ones that might highlight their own culpability – about how sane, humane societies should function all the time.

In fact, there is nothing unique about the coronavirus crisis. It is simply a heightened version of the less visible crisis we are now permanently mired in. As Britain sinks under floods each winter, as Australia burns each summer, as the southern states of the US are wrecked by hurricanes and its great plains become dustbowls, as the climate emergency becomes ever more tangible, we will learn this truth slowly and painfully.

Those deeply invested in the current system – and those so brainwashed they cannot see its flaws – will defend it to the bitter end. They will learn nothing from the virus. They will point to authoritarian states and warn that things could be far worse.

They will point a finger at Iran’s high death toll as confirmation that our profit-driven societies are better, while ignoring the terrible damage we have inflicted on Iran’s health services after years of sabotaging its economy through ferocious sanctions. We left Iran all the more vulnerable to coronavirus  because we wanted to engineer “regime change” – to interfere under the pretence of “humanitarian” concern – as we have sought to do in other countries whose resources we wished to control, from Iraq to Syria and Libya.

Iran will be held responsible for a crisis that we willed, one our politicians intended (even if its speed and means came as a surprise) as a way to overthrow its leaders. Iran’s failures will be cited as proof of our superior way of life, as we wail self-righteously about the outrage of a “Russian interference” whose contours we can barely articulate.

Valuing the common good

Those who defend our system, even as its internal logic collapses in the face of coronavirus and a climate emergency, will tell us how lucky we are to live in free societies where some – Amazon executives, home delivery services, pharmacies, toilet-paper manufacturers – can still make a quick buck from our panic and fear. As long as someone is exploiting us, as long as someone is growing fat and rich, we will be told the system works – and works better than anything else imaginable.

But in fact, late-stage capitalist societies like the US and the UK will struggle to claim even the limited successes against coronavirus of authoritarian governments. Is Trump in the US or Johnson in the UK – exemplars of “the market knows best” capitalism – likely to do better than China at containing and dealing with the virus?

This lesson is not about authoritarian versus “free” societies. This is about societies that treasure the common wealth, that value the common good, above private greed and profit, above protecting the privileges of a wealth-elite.

In 2008, after decades of giving the banks what they wanted – free rein to make money by trading in hot air – the western economies all but imploded as an inflated bubble of empty liquidity burst. The banks and financial services were saved only by public bail-outs – tax payers’ money. We were given no choice: the banks, we were told, were “too big to fail”.

We bought the banks with our common wealth. But because private wealth is our era’s guiding star, the public were not allowed to own the banks they bought. And once the banks had been bailed out by us – a perverse socialism for the rich – the banks went right back to making private money, enriching a tiny elite until the next crash.

Nowhere to fly to

The naive may think this was a one-off. But the failings of capitalism are inherent and structural, as the virus is already demonstrating and the climate emergency will drive home with alarming ferocity in the coming years.

The shut-down of borders means the airlines are quickly going bust. They didn’t put money away for a rainy day, of course. They didn’t save, they weren’t prudent. They are in a cut-throat world where they need to compete with rivals, to drive them out of business and make as much money as they can for shareholders.

Now there is nowhere for the airlines to fly to – and they will have no visible means to make money for months on end. Like the banks, they are too big to fail – and like the banks they are demanding public money be spent to tide them over until they can once again rapaciously make profits for their shareholders. There will be many other corporations queuing up behind the airlines.

Sooner or later the public will be strong-armed once again to bail out these profit-driven corporations whose only efficiency is the central part they play in fuelling global warming and eradicating life on the planet. The airlines will be resuscitated until the inevitable next crisis arrives – one in which they are key players.

A boot stamping on a face

Capitalism is an efficient system for a tiny elite to make money at a terrible – and increasingly untenable – cost to wider society, and only until that system shows itself to be no longer efficient. Then wider society has to pick up the tab, and assist the wealth-elite so the cycle can begin all over again. Like a boot stamping on a human face – forever, as George Orwell warned long ago.

But it is not just that capitalism is economically self-destructive; it is morally vacant too. Again, we should study the exemplars of neoliberal orthodoxy: the UK and the US.

In Britain, the National Health Service – once the envy of the world – is in terminal decline after decades of privatising and outsourcing its services. Now the same Conservative party that began the cannibalising of the NHS is pleading with businesses such as car makers to address a severe shortage of ventilators, which will soon be needed to assist coronavirus patients.

Once, in an emergency, western governments would have been able to direct resources, both public and private, to save lives. Factories could have been repurposed for the common good. Today, the government behaves as if all it can do is incentivise business, pinning hopes on the profit motive and selfishness driving these firms to enter the ventilator market, or to provide beds, in ways beneficial to public health.

The flaws in this approach should be glaring if we examine how a car manufacturer might respond to the request to adapt its factories to make ventilators.

If it is not persuaded that it can make easy money or if it thinks there are quicker or bigger profits to be made by continuing to make cars at a time when the public is frightened to use public transport, patients will die. If it holds back, waiting to see if there will be enough demand for ventilators to justify adapting its factories, patients will die. If it delays in the hope that ventilator shortages will drive up subsidies from a government fearful of the public backlash, patients will die. And if it makes ventilators on the cheap, to boost profits, without ensuring medical personnel oversee quality control, patients will die.

Survival rates will depend not on the common good, on our rallying to help those in need, on planning for the best outcome, but on the vagaries of the market. And not only on the market, but on faulty, human perceptions of what constitute market forces.

Survival of the fittest

If this were not bad enough, Trump – in all his inflated vanity – is showing how that profit-motive can be extended from the business world he knows so intimately to the cynical political one he has been gradually mastering. According to reports, behind the scenes he has been chasing after a silver bullet. He is speaking to international pharmaceutical companies to find one close to developing a vaccine so the United States can buy exclusive rights to it.

Reports suggest that he wants to offer the vaccine exclusively to the US public, in what would amount to the ultimate vote-winner in a re-election year. This would be the nadir of the dog-eat-dog philosophy – the survival of the fittest, the market decides worldview – we have been encouraged to worship over the past four decades. It is how people behave when they are denied a wider society to which they are responsible and which is responsible for them.

But even should Trump eventually deign to let other countries enjoy the benefits of his privatised vaccine, this will not be about helping mankind, about the greater good. It will be about Trump the businessman-president turning a tidy profit for the US on the back of others’ desperation and suffering, as well as marketing himself as a political hero on the global stage.

Or, more likely, it will be yet another chance for the US to demonstrate its “humanitarian” credentials, rewarding “good” countries by giving them access to the vaccine, while denying “bad” countries like Russia the right to protect their citizens.

Obscenely stunted worldview

It will be a perfect illustration on the global stage – and in bold technicolour – of how the American way of marketing health works. This is what happens when health is treated not as a public good but as a commodity to be bought, as a privilege to incentivise the workforce, as a measure of who is successful and who is unsuccessful.

The US, by far the richest country on the planet, has a dysfunctional health care system not because it cannot afford a good one, but because its political worldview is so obscenely stunted by the worship of wealth that it refuses to acknowledge the communal good, to respect the common wealth of a healthy society.

The US health system is by far the most expensive in the world, but also the most inefficient. The vast bulk of “health spending” does not contribute to healing the sick but enriches a health industry of pharmaceutical corporations and health insurance companies.

Analysts describe a third of all US health spending – $765 billion a year – as “wasted”. But “waste” is a euphemism. In fact, it is money stuffed into the pockets of corporations calling themselves the health industry as they defraud the common wealth of US citizens. And the fraudulence is all the greater because despite this enormous expenditure more than one in 10 US citizens has no meaningful health cover.

As never before, coronavirus will bring into focus the depraved inefficiency of this system – the model of profit-driven health care, of market forces that look out for the short-term interests of business, not the long-term interests of us all.

There are alternatives. Right now, Americans are being offered a choice between a democratic socialist, Bernie Sanders, who champions health care as a right because it is a common good, and a Democratic party boss, Joe Biden, who champions the business lobbies he depends on for funding and his political success. One is being marginalised and vilified as a threat to the American way of life by a handful of corporations that own the US media, while the other is being propelled towards the Democratic nomination by those same corporations.

Coronavirus has an important, urgent lesson to teach us. The question is: are we ready yet to listen?



they call it the Skywalker saga

2 Shares

This will ruin the Rise of Skywalker for you.

So it’s all about the Skywalker family, right?

Shmi Skywalker – dies alone and in agony.

Padme Amidala, married into the family – gets force choked by the love of her life and dies in childbirth.

Anakin Skywalker – turns evil, slaughters children, dies saving his son.

Han Solo, married into the family – relationship with Leia falls apart, stabbed with a lightsaber by his son, dies.

Luke Skywalker – lives a miserable life, alone and unloved, dies alone, and I remind you Anakin died so that he might live.

Leia Organa – love with Han falls apart, dies trying to save her son…

Ben Solo – sacrifices himself in an act of great heroism to save the love of his life, dies, seemingly unmourned.

That’s it, that’s the family history.

What… what the fuck?


Ten Tips For Doing Open Science

1 Comment

Science is the quintessential public good. It’s an iterative process in which new knowledge builds on previous knowledge. For this process to work, science needs to be ‘open’. Both the results and methods of scientific research need to be freely available for all. The open science movement is trying to make this a reality.

In this post, I share some tips for doing open science. If you’re an active researcher, these are tips you can use in your own research. If you’re not a researcher but are interested in science, these are tips that will help you judge the openness of scientific research.

The iceberg

Scientific research is like an iceberg. The published paper is the tip of the iceberg — a summary of the research you’ve done. But this summary is a small part of the actual research. Below the surface lurks the bulk of the iceberg — the data and analysis that underpin your results. For science to progress, fellow researchers need access to both the tip of the iceberg (your paper) and the subsurface bulk (the data and analysis).

Making your paper open access is the easy part of doing open science. First, make sure you upload the preprint to an online repository. Next, try to publish open access. If this is too expensive, you can always self-archive your published paper after the fact.

The more difficult part of doing open science is making your data and methods available to all. This takes effort. It means you have to design your research flow with sharing in mind. Think of the difference between how you plan and write a scientific paper versus how you plan and write a note to yourself. The note takes far less effort because it only needs to be intelligible to you. The scientific paper, in contrast, needs to be intelligible to your peers.

The same principle applies to publishing your data and analysis. If you want your methods to be intelligible to others, you need to plan your analysis just like you would plan your paper. It needs to be designed for sharing from the outset.

With that in mind, here are 10 tips for making this process as pain-free as possible.

1. Upload your data and analysis to a repository

How should you grant access to your data and analysis? One way is to do it manually. At the bottom of your paper you write: “Data and code are available by request”. If someone wants your supplementary material, they email you and you send it to them.

This is an easy way to share your analysis. The problem, though, is that you’re a scientist, not a data curator. Unless you’re diligent about preserving your work, it will get misplaced or lost over time.

Here’s a concrete example.

I’ve done a lot of research on hierarchy within firms. As part of this work, I’ve contacted fellow researchers and asked them to share their data and analysis with me. In one case, the researchers were happy to share their data … except they couldn’t find it! The work had been done in the 1990s. In the intervening 25 years, the original data had been lost. And I can’t blame these researchers. Do you know where your 1990s computer files are? I don’t.

The solution to this problem is to upload your data and analysis to an online repository. These repositories are professional curators, so the hope is that the data will be available long into the future.

There are now many scientific repositories (here’s a list). My favourite repository is the Open Science Framework (OSF). There are a few things I like about the OSF. First, as the name suggests, the Open Science Framework is committed to open science.

Second, the OSF has a great preprint service. So you can write a paper, upload the preprint to OSF and then link this paper to your data repository. Here, for instance, is the preprint and supplementary materials for a recent paper of mine.

Third, the OSF has version control. So as your research evolves, you can upload revised versions of your paper and analysis. As you update things, the OSF keeps the older versions available.

Fourth, the OSF allows you to link projects. Suppose you’re working on a big research project that has many subprojects. The OSF allows you to link all these things together. For an example of this linking, check out the replication projects sponsored by the OSF. In these projects, many researchers work independently to replicate published research. When finished, the researchers upload their data and methods to the OSF and link it together in a single project. There are now replication projects in psychology, cancer biology, economics, and the social sciences.

Another advantage of uploading your materials to a repository like OSF is that you get a DOI for the resulting project. This means that your supplementary materials are citable. So if a researcher builds off of your analysis, they can cite your supplementary work. If you put in the effort to do open science, you might as well get credit for it.

2. Link your paper and analysis from the beginning

Putting your data and analysis in a repository takes work. To make this process as pain-free as possible, I recommend linking your analysis and paper from the outset.

To frame this process, I’ll start by telling you what not to do. When I first started uploading supplementary materials, it was a lot of work. The problem was that I hadn’t integrated this step into my research flow. As I did my research, I would accumulate a hodgepodge of files and folders spread out on my computer. From this hodgepodge, I’d pull things together to write my paper. When I finished writing, I’d manually merge all the research into a file that I’d upload as supplementary material. As you can imagine, this was a pain.

The solution, I realized, was to organize my research into projects. Each project is associated with a paper (or potential paper) and is designed from the outset to be uploaded to the OSF. So when I start new research, I create a folder with the project name, and then put two subfolders inside it: Paper and Supplementary Materials. The Paper folder contains the manuscript in progress. The Supplementary Materials folder contains all the data and analysis.
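
As a sketch, the layout for a hypothetical project might look like this:

    hierarchy-paper/                  (project folder, named for the paper)
        Paper/                        (the manuscript in progress)
        Supplementary Materials/      (all data and analysis)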

Having the manuscript and analysis linked from the start is helpful because the act of writing inevitably leads to new analysis. As a novice researcher, this always surprised me. By now I expect it. When you write up your ideas, you get new ones that lead to new analysis. I like to keep track of these changes, and I do so by archiving the manuscript and analysis together. Each version of the manuscript has a corresponding version of the analysis that goes with it. When I finish the project (and paper), everything goes on the OSF with no extra work.

3. Make a data analysis pipeline

For your Supplementary Materials to be useful, they need to be organized in a coherent way. I recommend creating a data analysis pipeline so that other researchers (and of course you) can follow your analysis.

Any type of analysis is going to have three basic steps:

  1. Clean the data
  2. Analyze the data
  3. Display the results in a table or figure

To make these steps coherent, I recommend dumping Excel and switching to a coding language like R or python. If you’re a committed Excel user, I understand your reluctance to leave it behind. As a novice researcher, I fell in love with the simplicity of Excel. I could churn out simple analysis in a few minutes, and plot the results on the same page.

But as my research progressed, my love of Excel waned. As I started to analyze data with millions of entries, using Excel became painful. Try sorting 10 million lines of data in Excel and see how your computer reacts. Faced with frequent frozen screens of death, I switched to R and never looked back.

But even if you’re working with small data sets, I still recommend dumping Excel because it’s difficult to use it to implement a data analysis ‘pipeline’. A good pipeline needs to be automated. This way, when your data changes, you can easily update your results. The problem with Excel is that it mixes raw data, data cleaning, and analysis. When you use R (or comparable software) you can keep these steps separate, and automate them.

There are many ways to create a data pipeline, but here’s what I do. Inside my Supplementary Materials folder, I have a folder called Data in which I keep all the raw data for my analysis. I generally use a separate subfolder for each data set.

Suppose I work with data from the World Inequality Database (WID). Inside my Data folder, I put a subfolder called WID that contains the raw data from WID. If I need to clean the data, I make an R script called clean.R and put it in the WID folder. This script would output cleaned data called WID_clean.csv. Then I make an R script called analysis.R that manipulates this cleaned data and outputs the results as WID_results.csv. At the end of my data pipeline, I plot figures and create tables. I like to put all of my figure scripts in one folder called figures.
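
As a minimal sketch, the two scripts might look something like this (the file and column names are hypothetical, not the real WID schema, and the paths assume you run everything from the Supplementary Materials folder):

    # clean.R -- read the raw WID download and drop incomplete rows
    raw <- read.csv("Data/WID/WID_raw.csv")         # hypothetical raw file name
    clean <- raw[complete.cases(raw), ]             # remove rows with missing values
    write.csv(clean, "Data/WID/WID_clean.csv", row.names = FALSE)

    # analysis.R -- summarize the cleaned data and write the results
    clean <- read.csv("Data/WID/WID_clean.csv")
    results <- aggregate(value ~ year, data = clean, FUN = mean)   # hypothetical columns
    write.csv(results, "Data/WID/WID_results.csv", row.names = FALSE)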

At the end of a large research project, I usually have dozens of scripts in my data pipeline. It can get a bit unwieldy to keep track of all this code, which brings me to my next tip.

4. Make a RUN ALL script

As your analysis gets more complicated, it becomes difficult to keep track of your data pipeline. For me, this came to a head when I was working on research that linked many different types of data. When I updated one part of the analysis, I’d have to manually update all the other parts. I’d go to each script in the pipeline and press RUN. This was annoying, not to mention difficult. I can’t tell you how many times I started writing up my results, only to find that a data update had failed to pass through the whole pipeline.

The solution is to create a RUN ALL script that executes your entire data pipeline. The RUN ALL script is useful for a few reasons. First, it’s a table of contents for your data pipeline. If you comment your RUN ALL code well, another researcher can read it and understand your data pipeline.

Second, your RUN ALL script helps you keep track of what you’ve done. You can see, in one file, how all the parts of your data pipeline fit together.

Third, your RUN ALL script automates your analysis, making updates easy. Change some of your analysis? Update some of the data? No problem. Just run your RUN ALL script and everything gets updated.

Lastly, the RUN ALL script makes it easy to debug your data pipeline. Just run the script and see where you get errors.

I usually write my RUN ALL script in Linux Bash, but that’s just a personal preference. Any language will do.
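
In R, for instance, a minimal RUN ALL script is just a sequence of source() calls, one per stage of the pipeline (the paths follow the hypothetical WID example above):

    # run_all.R -- execute the entire data pipeline, in order
    source("Data/WID/clean.R")        # stage 1: clean the raw data
    source("Data/WID/analysis.R")     # stage 2: analyze the cleaned data
    source("figures/make_figures.R")  # stage 3: generate figures (hypothetical script name)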

5. Use relative file paths

This is a technical point, but one that’s important when it comes to running your code on other computers. In your data pipeline, you’re going to tell the computer to look in many different directories. The quick and easy way to do this is to specify an absolute file path. For instance, if I had a folder called project on my desktop, the absolute path would be /home/blair/Desktop/project.

When I first started making data pipelines, I’d use absolute paths because they’re easy. Just copy the path from your file browser into your code and you’re done. The problem with doing this is that it means you can’t move your project. If, for instance, I moved my project folder off the desktop, all the code would break. That’s annoying for me. It also means that the code won’t work on anyone else’s computer, since their absolute file paths will always differ from mine.

The solution is to use relative file paths. You write your scripts so that the file paths are specified relative to the project folder. This way, no matter where your project folder lives (on your computer or anyone else’s) your code will work.

In R, I use the here package to deal with relative file paths. The command here() finds the root folder of your project. You can then specify any file path relative to this root.
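
Here’s a minimal sketch, assuming the Data/WID layout from earlier:

    library(here)   # install.packages("here") if needed

    # here() resolves the path relative to the project root, so this
    # line works no matter where the project folder lives
    clean <- read.csv(here("Data", "WID", "WID_clean.csv"))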

6. Automate your data downloads

Let’s face it, doing science can be slow. I can’t tell you how many times I’ve done some analysis, got a result, and then put this into a mental folder called “think about it later”. Two years down the road, I return to this research. But now the data is two years out of date and I need to update it.

Enter automated data downloads. If you’re working with an online database, I suggest including the data download in your data pipeline. So if I were working with data from the World Inequality Database (WID), I’d write a little script to download the WID data. When you do this, updating your research is a breeze.
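
In R, base download.file() is enough for a simple version of this (the URL below is a placeholder, not the real WID endpoint):

    # download.R -- fetch the latest raw data at the start of the pipeline
    url <- "https://example.org/wid/data.csv"       # placeholder URL
    download.file(url, destfile = "Data/WID/WID_raw.csv", mode = "wb")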

When you return to the analysis two years later, you’ll automatically have the most up-to-date data. Moreover, suppose you publish the results and put the analysis in an online repository. Twenty years down the road, another researcher could rerun your analysis and see if the results hold for the twenty years since you published your paper.

7. Use open source software

This should go without saying, but if you want your research to be open, you need to use open source tools. Suppose, for instance, that you do all of your analysis using proprietary software like SAS, SPSS, Stata or Statistica. If another researcher wants to run your code, they have to pay for this software. Some of it (SAS, for instance) can be pricey. In contrast, open source tools like R and python are free.

Open source software also tends to have a stronger online community. If you have a coding problem with R or python, it’s almost guaranteed that someone else has had the same problem, and that there’s a solution on Stack Exchange. This will help you build your data pipeline. It will also help other researchers understand your code.

8. Comment your code

Most scientists (me included) have no formal training in coding. This means we commit the cardinal sin of commenting our code poorly.

When I write code for data analysis, my first impulse is to write it quickly with no comments. I want to see the results now, damn it! I have no time for comments! As I’ve matured, I’ve learned to resist this urge. The problem is that code without comments is unintelligible. I would discover this when I’d return to the analysis a few months later and have no idea what my code did.

The solution is to comment your code. Comments tell the reader what your code does. The best comments are usually high level. Don’t tell the reader that “this is a loop”. Write a comment that tells the reader what your code does and what the major steps are. Assume that the reader understands the basic syntax of the language.
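
Here’s a sketch of the difference, using a toy loop that re-runs the cleaning scripts in a project:

    files <- list.files("Data", pattern = "clean\\.R$",
                        recursive = TRUE, full.names = TRUE)

    # Bad comment: restates the syntax
    # loop over the files and source them
    for (f in files) source(f)

    # Better comment: explains the step's role in the pipeline
    # re-run every cleaning script so that downstream results
    # reflect the latest raw data
    for (f in files) source(f)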

For more information about how to document your code, check out Ten simple rules for documenting scientific software.

9. Automate your statistical tables

I suggest making statistical tables a part of your data pipeline. This may seem obvious, but it’s something I’ve only recently introduced into my data pipeline.

I use the R package xtable to generate tables. I do all of my analysis in R, and then have xtable output the finished table of statistics. This takes some effort to learn, but it will save you time in the long run. When you’re writing a paper, you inevitably update your analysis. After this update, I used to cut and paste the statistics into a table. Now I have everything automated. When I change my analysis, all the tables get updated automatically. In terms of open science, this automation also makes it clear to researchers how you derived your published statistics.
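
Here’s a minimal sketch (the statistics are toy numbers, and the output path is just an example):

    library(xtable)   # install.packages("xtable") if needed

    # toy results; in practice this data frame comes out of the analysis scripts
    results <- data.frame(variable = c("x", "y"),
                          mean     = c(1.2, 3.4),
                          sd       = c(0.5, 0.9))

    # write the LaTeX table to a file the manuscript includes directly,
    # so the table updates whenever the pipeline is re-run
    # (assumes a tables/ folder exists in the project)
    print(xtable(results, caption = "Summary statistics"),
          file = "tables/results.tex", include.rownames = FALSE)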

10. Use version control

Version control is something that all trained programmers use. It’s a way of backing up your code and tracking changes over time.

You should use version control for many reasons. The most obvious is that when you break your code, you need an easy way to revert to the last working version. Maybe you tweak your data pipeline, but in the process, you break it horribly. The results are gibberish. What do you do? If you haven’t used version control, you’re out of luck. You’ll have to rewrite your code (from memory) back to the last working version.

If you’ve used version control, reverting is easy. You literally click ‘revert’. There are many tools for doing this. The most popular is git. I use git through the GitKraken GUI.

Using version control means you periodically ‘commit’ your code to a repository (either on your computer, or online to something like GitHub). Each time you commit, git will track the changes to your code. This could be code for a single script, or the code for your entire data pipeline.
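
From the command line, the core workflow is only a few commands (the commit message is just an example):

    git init                           # turn the project folder into a repository
    git add -A                         # stage everything in the project
    git commit -m "working pipeline"   # snapshot the current working state
    git revert HEAD                    # later: undo the last commit with a new one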

Version control helps save you from disaster. It also provides a simple way to keep track of how your data pipeline has changed over time.

Learn open science by doing it

In this post, I’ve shared tips for doing open science that I’ve learned over the last few years. Some of these tips I learned by looking at other people’s work. But many of them I learned by trial and error.

If you’re interested in opening up your research methods, the best way to learn is to just do it. Plan to publish the data pipeline for your next paper. Hopefully the tips I’ve shared will help you. But I’m sure you’ll come up with tips of your own. When you do, please share them. Let’s make science open!




