Understanding Capitalism Part V: Evolution of the American Economy

March 15, 2013

When the United States of America was founded in 1787 it was the most egalitarian Western nation in the world for citizens of European descent, indeed one of the most egalitarian major societies in all of human history. It was the relative equality of white American society that made America such an attractive place for European immigrants and a "land of opportunity". Understanding capitalism, its impact on economies in general, and its impact on America in particular, requires both a knowledge of American economic history and an understanding of economic theory. Let's first take a look at the history of the American economy, and then delve into economic theory as it relates to the history of the American economy and capitalism in general.

Overview of American Economic History

Colonial and Antebellum America

When America was founded, noble aristocracies ruled Europe, and virtually every major nation on earth was dominated by a feudal system of some sort or another. In Japan, China, India, the Ottoman Empire, and across Europe, massive under-classes of property-less peasants were dominated by a relative few who owned and controlled the property of the nation. The Industrial Revolution was just beginning to dawn in England, but the economies of the world were still dominated by agriculture. As such, land was of course the most significant form of capital, and virtually all of the land in every "civilized" nation was owned by a relative few, being split primarily among the nobility and religious institutions.

In the British colonies of America, however, the rule of aristocracies had never been solidly established. Though aristocrats coming to America from Europe enjoyed initial advantages in the "New World", the widespread availability of land and the small population meant that there was more than enough land for everyone who came to America from Europe to obtain their own land at little or no cost. During the early colonial period land could be obtained simply by "right of discovery", which meant that anyone could claim any land that was not already claimed by other Christians. This right was granted in the colonial charters issued by the English crown, and similar charters were issued by other European nations for their colonies. When land was found that was inhabited by natives, the natives were either killed, driven off, or some trade agreement was made to ensure that the natives would leave the land and allow the Europeans to take it over.

Property rights in the British colonies, however, were still based on the English feudal system. All of the land was technically owned by the king of England, and was considered on loan to the title holders. This made little difference in America in terms of land use, but understanding the feudal heritage of property law is important for understanding the development of modern property rights. Under the feudal system labor was not recognized as imparting a right to property. This is because all property was owned by the king and titles were granted to a hierarchy of administrative lords who retained rights to the value produced using said property. Recognizing a right to property created by labor would have unraveled the entire feudal system, since the feudal hierarchy was effectively a hierarchy of non-laborers who derived all of their income via ownership or title to property upon which laborers worked and created value, which was all then owned by the title holders - ultimately the king.

As Thomas Paine later noted in The Rights of Man:

The aristocracy are not the farmers who work the land, and raise the produce, but are the mere consumers of the rent; and when compared with the active world, are the drones, a seraglio of males, who neither collect the honey nor form the hive, but exist only for lazy enjoyment.
- Thomas Paine; The Rights of Man, 1791

It was in opposition to this system of ownership that John Locke famously put forward his statements that the right to ownership of property was a product of labor.

[E]very man has a "property" in his own "person." This nobody has any right to but himself. The "labour" of his body and the "work" of his hands, we may say, are properly his. Whatsoever, then, he removes out of the state that Nature hath provided and left it in, he hath mixed his labour with it, and joined to it something that is his own, and thereby makes it his property. It being by him removed from the common state Nature placed it in, it hath by this labour something annexed to it that excludes the common right of other men. For this "labour" being the unquestionable property of the labourer, no man but he can have a right to what that is once joined to, at least where there is enough, and as good left in common for others.
- John Locke; Second Treatise of Civil Government, 1690

What happened in the British colonies of America, however, was that virtually all Europeans were able to become their own lords. While the feudal system of property rights technically remained the same, the fact that almost everyone (white males) owned their own capital meant that (white male) individuals did retain ownership of the value created by their own labor, but that right to ownership remained granted through property rights, not through a right of labor.

Estimates show that land was by far the dominant form of capital during the colonial period. As of 1774 land is estimated to have accounted for 56% of privately held wealth, with slaves accounting for 19%, livestock for 9%, and tools & equipment for 3%.

Ultimately land was granted in five ways within the colonies: via ownership shares in the companies which came to America, via headright grants (a set amount of land per person in a family), by purchase from the local governments, by squatters' rights, and via special-purpose grants from local governments. The result was that the most significant form of capital, land, was "free" or very cheap to acquire in the New World, which brought about the conditions whereby nearly every white male was able to own his own property, and to thus secure for himself the right to ownership of the products of his own labor.

Not only was land cheap and easy to acquire in the New World, but importantly land owners had the right to do with the land whatever they wanted, with very few limits. However, taxes had to be paid on land, which made simply holding large blocks of land and doing nothing with it generally unaffordable. This drove a more equal distribution of land, because it was generally only affordable to own as much land as one could actively extract revenue from. The overall effect of this practice was to encourage widespread family farming in colonial America, which was by design. The colonists, and indeed even the English Crown, wanted to encourage land use, not simply land acquisition for acquisition's sake.

Nevertheless, land speculation became quite popular in the colonies and remained very popular well after the founding of the country. Land speculation was the first major speculative market in America. Several factors made land speculation an almost no-lose proposition for the early inhabitants. Firstly, land could be acquired for next to nothing, and secondly the population was increasing rapidly, with new people arriving daily. Obviously this meant that people could come to America, acquire a large amount of land at little cost, hold onto it for a short while, and then sell it for significant profits.

All of these factors contributed to the fact that early Americans strongly associated private capital ownership with both individual freedom and equality. In America the right of individuals to effectively own land and to actually be able to use it in practice, as well as to run their own businesses largely free of the taxes and regulations imposed on businesses by the Crown in England (which were used to benefit the aristocracy), made individuals more equal than they were in Europe.

In Europe all property was owned by kings, and was administered via a feudal hierarchy of nobles who collected taxes from their subjects, not for the benefit of the public or for use in creating public goods, but rather as their form of private income. In Europe massive inequality was a product of consolidated property ownership, where a relatively small number of people owned and controlled all of the property. In Europe the "property owners" were the nobility and the church. As such, property rights in Europe were viewed completely differently than in America. In Europe property rights granted the nobility the right to property ownership, thereby disenfranchising the bulk of the population. In Europe workers worked on property owned by nobles and they had to pay a tax to the property owners for a portion of everything that they created. These taxes were the basis of the incomes of the nobility. The nobility generally did not have to work at all if they didn't want to and had few obligations to use their incomes for anything other than their personal enjoyment. Some aristocrats gave money to charity, some didn't. Some aristocrats used their resources to engage in scientific pursuits or to develop public works, others used theirs to have orgies and elaborate parties, and of course some did all of the above.

When Alexis de Tocqueville visited America in the early 1800s he was most strongly struck by the relative social and economic equality that existed in the United States. He was so impressed by American equality in fact that it is the first thing he mentions in his classic work Democracy in America, and it is what he claims inspired him to write the book. Below is the introduction to Democracy in America, published in 1835:

Amongst the novel objects that attracted my attention during my stay in the United States, nothing struck me more forcibly than the general equality of conditions. I readily discovered the prodigious influence which this primary fact exercises on the whole course of society, by giving a certain direction to public opinion, and a certain tenor to the laws; by imparting new maxims to the governing powers, and peculiar habits to the governed. I speedily perceived that the influence of this fact extends far beyond the political character and the laws of the country, and that it has no less empire over civil society than over the Government; it creates opinions, engenders sentiments, suggests the ordinary practices of life, and modifies whatever it does not produce. The more I advanced in the study of American society, the more I perceived that the equality of conditions is the fundamental fact from which all others seem to be derived, and the central point at which all my observations constantly terminated. I then turned my thoughts to our own hemisphere, where I imagined that I discerned something analogous to the spectacle which the New World presented to me. I observed that the equality of conditions is daily progressing towards those extreme limits which it seems to have reached in the United States, and that the democracy which governs the American communities appears to be rapidly rising into power in Europe. I hence conceived the idea of the book which is now before the reader.

Tocqueville goes on to state in other sections:

As soon as land was held on any other than a feudal tenure, and personal property began in its turn to confer influence and power, every improvement which was introduced in commerce or manufacture was a fresh element of the equality of conditions.

...

The social condition of the Americans is eminently democratic; this was its character at the foundation of the Colonies, and is still more strongly marked at the present day. I have stated in the preceding chapter that great equality existed among the emigrants who settled on the shores of New England. The germ of aristocracy was never planted in that part of the Union.

...

In America there are comparatively few who are rich enough to live without a profession. ... In America most of the rich men were formerly poor; most of those who now enjoy leisure were absorbed in business during their youth;...

...

America, then, exhibits in her social state a most extraordinary phenomenon. Men are there seen on a greater equality in point of fortune and intellect, or, in other words, more equal in their strength, than in any other country of the world, or in any age of which history has preserved the remembrance.

This is not to say that there was total equality in America; clearly there wasn't, and Democracy in America has been criticized for turning a largely blind eye to the role of slavery and other issues of inequality in America. But what is clear is that there was much greater equality in America among the free citizens than there was in Europe at the time, because in America virtually everyone was a property owner. There were still indentured servants in America, there were still people living in poverty in the cities, women generally couldn't own property, and of course there was slavery, but despite all these things, the level of equality was still much, much greater than in Europe, or practically any other technologically developed society in history.

As of 1774 it is estimated that the wealthiest 10% of the population held roughly 42% of the wealth, with the greatest disparity being in the Southern colonies. Today, by comparison, it is estimated that the wealthiest 5% of the population holds roughly 64% of the wealth and the top 10% holds roughly 75% of the wealth.

At the time of the revolution roughly 85% of American colonists were farmers, and roughly 90% of free citizens worked for themselves or in family businesses. The remaining 10% were largely either paid free laborers, typically working in New England in places like shipyards, apprentices working in a trade for a master craftsman, or indentured servants. Roughly 12% of the total colonial population were slaves, with about 30% of the population of the Southern colonies being slaves. The graph below shows the percentage of the American workforce dedicated to agriculture throughout the 19th century. As of 1860, just prior to the Civil War, roughly 14% of the American workforce was working in manufacturing, with the remaining portion of non-agricultural workers engaged in the services sector.


source: Stanley Lebergott, Manpower in Economic Growth: The American Record Since 1800; 1963, p. 510

Due to the relationship between gender and property ownership, a large portion of free laborers were women and children, because of course women and children either couldn't or didn't own property. Most manufacturing prior to the Civil War still took place in the home, but the portion of manufacturing taking place outside the home had grown steadily over time prior to industrialization, and became essentially the only means of manufacturing after industrialization. In the early 19th century, women and girls who were seen as unneeded workers on family farms often left home to travel to towns and cities where they worked as paid laborers in increasingly mechanized collective manufacturing facilities. The dominant manufacturing industry in early America was textile production, based largely on the cotton supplied from the South, and it was in this industry that many women worked. Women typically had no income at all on the family farms (just food and boarding), so even though their incomes were low and they were often exploited by manufacturers, manufacturing gave them at least some income and a level of independence. Male free laborers were almost entirely first-generation immigrants who had not yet obtained any property of their own, or free non-whites. Prior to the Civil War as much as 40% of the non-agricultural workforce was composed of women and children, while the free agricultural workforce was only 5% female. Female agricultural workers, other than slaves, were virtually all family members of male farm owners. Of the women in manufacturing, essentially all of them were unpropertied wage laborers, while most of the men in manufacturing were both owners and workers.


women in pre-industrial home-based manufacturing


women in early industrial textile manufacturing

At this point, free laborers had very few rights. Prior to 1820 people who didn't own property, in the form of land, home, or business, did not have the right to vote, and free laborers had very few legal protections regarding compensation for their labor. Workers could be fired without pay for work done, workers could be held responsible for the cost of supplies and for damages to capital, and if an employer went bankrupt or out of business (which was quite common) laborers generally had no expectation of being able to receive any owed compensation. Again, the relationship between labor and capital descended directly from the feudal system, where all rights were defined by property ownership because it was through property ownership that the hierarchy of the nobility was defined. The difference in America was that every male citizen was free to become a "feudal lord" and free laborers were free to move from lord to lord, or to become lords themselves, but the relationship between the propertied and the propertyless remained relatively unchanged.

While the widespread ownership of private property was a defining characteristic of the American condition, most of the land in America remained public property owned by states and the federal government until after the Civil War. As of the Civil War, roughly 65% of incorporated American land was still public domain, not privately held by anyone.

This public land was often used by individuals for private gain. The uses of the public lands included farming, fur trapping, hunting, mining, cattle grazing, and timber harvesting. In each of these cases, the use of public lands essentially acted as the use of shared capital, i.e. collectively held capital. Public lands did play an important role in providing a means for propertyless individuals to work for themselves and generate enough wealth to become property owners themselves. There were of course also many abuses of the public lands, with numerous schemes where individuals created false deeds to public lands and sold the land to individuals, in which cases sometimes the individuals retained rights to the land and sometimes they didn't. The public lands were also widely exploited by loggers and miners who would in some cases clear cut the land, reaping great profits, leaving destruction in their wake and paying no compensation to the state for their use of the resources.

In the 1820s laws regarding the use of private property were radically changed in America. English law had long established the right of private property owners (remember these were mostly the aristocracy at the time) to be free from any negative impacts on their property caused by the use of another person's private property. This granted private property owners prescriptive rights, meaning that their ownership granted them a right to the quality of their property and others could not do anything that would force unwanted externalities upon them. In practice this meant that, in the case of two land owners next to one another, one couldn't erect a building that would cast a shadow on the property of the other, or erect a mill that would muddy the waters of the other down-stream, etc. Clearly this would stifle development, and the primary concern of early Americans was development, so prescriptive rights were largely eliminated in America and replaced with priority rights. Priority rights gave priority to the property owner to modify their property and use it as they wanted, even if it caused negative impacts on others. Under the older laws things like pollution were fundamentally illegal; under the new laws that swept across America, pollution and negative impacts on others became the norm. The new laws largely removed the responsibility of property owners to compensate other property owners for the negative impacts that their use had upon them. This paved the way for the development of industries that produced noise, smoke, trash, etc., which consumed water and diverted rivers, and which had widespread negative impacts on others in general.

By granting property owners free right to negatively impact others through the use of their private property, this gave rise to the role of negative externalities as a driver of profits. Negative externalities are negative economic impacts that result from some action, the costs of which are borne by someone other than those who caused the impact. This allows producers to produce goods more cheaply, because a portion of the cost of production is borne by others. The producer reaps the full reward for the sale of their goods, but does not pay the full cost of production, pushing that cost off onto others, thus increasing the profit margins of the producer at a cost to others.
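To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers (none of these figures come from the historical record): the producer's sale price is unchanged, but the margin grows by exactly the amount of cost pushed onto others.

```python
# Toy illustration of a negative externality as a profit driver.
# All numbers are hypothetical, chosen only to show the mechanism.
price = 100            # revenue per unit sold
internal_cost = 70     # costs the producer actually pays (labor, materials)
external_cost = 20     # damage borne by others (e.g. polluted water downstream)

# If the producer had to compensate those harmed (the older prescriptive-rights rule):
profit_full_cost = price - (internal_cost + external_cost)   # 10

# If the damage is simply pushed onto others (the newer priority-rights rule):
profit_externalized = price - internal_cost                  # 30

print(profit_full_cost, profit_externalized)
# The 20-unit difference in profit is exactly the cost paid by someone else.
```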

In the period from the American Revolution to the Civil War there was a general increase in economic inequality. This was most pronounced in the South, where slavery was practiced. Increases in inequality generally corresponded with the decline of family farming, growing concentrations of capital ownership (mostly in the form of land and slaves), and the early rise of small factories. In the South the plantation system was resulting in growing concentrations of capital, whereby large-scale plantations were increasing in size and in their share of the economy. This was pushing small farmers out and leading to a growing number of free white laborers working on large plantations instead of owning their own farms. Likewise, the large plantations were largely self-sufficient and independent, housing their own machine shops and producing much of their own food and equipment, thus the growth of the plantations did not lead to overall economic expansion in the South the way that growing factories in the North led to overall economic expansion by increasing demand for supporting goods and services.


sugar plantation

With the early rise of small factories, artisan labor was in decline. Income for self-employed artisans grew more slowly than for others as more manufacturing was being done outside the home in small factories. In the period prior to the Civil War organized labor was essentially nonexistent in America. Unions were considered a form of conspiracy, and had been illegal under English law for centuries on grounds of conspiracy and treason, given that the unions were in effect organizations of unpropertied workers who attempted to "conspire" together to their benefit at the expense of their lords and ultimately the propertied interests of the king. The attitude toward unions as nefarious conspiracies against the interests of property holders remained in America, and in American law. Generally the laws of the United States did not recognize any right to ownership of property created by laborers prior to the Civil War, as any laws recognizing a right to property inherent in labor would have called the system of slavery into question as well. A few states did pass laws that eased the restrictions on unions, making the formation of labor unions legal within those states, but actions such as strikes and much in the way of exercise of any real power remained illegal.

From the founding of the country through to the presidency of FDR in the 1930s, the primary populist interests were those of the small farmers. In the period prior to the Civil War the major economic conflicts of interest were not between labor and capital owners (because there were so few free laborers who had any political rights to begin with); they were between the farmers, the bankers, and the merchants and manufacturers. This is key to understanding the development of many of America's economic institutions and laws, as well as American politics.

American populism is historically rooted in socially conservative farming culture, defined by small capital ownership and the defense of small capital owners against larger capital owners, especially against banking. America's small farmers developed a unique economic and political outlook that was very different from other economic and political platforms around the world. This is because American farmers did own their own property, unlike farmers in virtually all other countries, especially Europe. American farmers were socially conservative, and held tightly to longstanding Christian notions about interest and banking, namely that all usury (interest) was a form of theft and should be illegal.

What existed in America, essentially up until the post-World War II era, was a powerful political bloc rooted in the farmers that was strongly socially conservative, anti-Semitic, anti-banking, anti-corporate, anti-free-market, anti-labor, and strongly reliant on the federal government for support and economic regulation. These populists viewed the east coast financial centers and industrialists with tremendous suspicion. They sought government control and regulation of prices, they were strongly opposed to government subsidies for corporations, they were opposed to the right of foreigners or corporations to own land in America, and they sought the nationalization of things like banks and infrastructure so that the federal government could set prices below "market value" for things like interest rates and transportation fees.

Industrialization

This brings us to major differences between the development of industrialized capitalism in America and Europe. Capital ownership was never widespread in Europe like it was in America. The Europeans went essentially from feudal systems of government and property ownership directly to democratic systems of government and the development of industrialized economies, without a period in which virtually all families owned their own capital. The bulk of the European population went from being serfs to being wage-laborers, whereas in America there was a period of time during which virtually all white families were capital owners. In Europe capital ownership started out highly concentrated under feudalism and then became slightly less concentrated under democratic industrialized capitalism, whereas in America capital ownership started out highly distributed, and became more concentrated under industrialized capitalism.

The difference in capital distribution between Europe and America prior to industrialization is key to understanding the differences in the development of European and American economic systems and political cultures. Indeed America is unlike any other country on earth in regard to its transition from an agricultural economy to an industrialized economy (which is sometimes used as the basis for the idea of "American Exceptionalism"). America is essentially the only country on earth that had an egalitarian society with widespread capital ownership prior to industrialization. Every other country, from European countries to Japan to China to India to Russia to the South American countries, already had highly concentrated wealth with all of the capital concentrated in the hands of a few at the time that they began developing industrialized economies (the Dutch being somewhat of an exception to this).

In Russia and China, obviously, industrialization was preceded by "Communist" revolutions. What the so-called "Communist" regimes did in these countries was they stripped the feudal property owners of their ownership and essentially made the state the owner of everything. Various types of collective farming systems were implemented with major land redistribution schemes. This had the impact of reducing economic inequality and creating more equal wealth distribution than had previously existed in those countries, but the basis for industrialization was radically different than in America. In these countries industrialization was a top down process.

In places like Japan industrialization proceeded again via a top down process, but in the case of Japan the process of industrialization was administered by the hierarchy of the imperial Japanese state, with the property rights of the aristocracy largely intact prior to World War II. Capital ownership in Japan became more widely distributed after World War II, but still never went through a phase of widespread individual capital ownership.

In Europe the process of industrialization was not as top down as it was in Japan, Russia or China, but it was also not as bottom up as it was in America either. Different countries industrialized at different rates in Europe. Obviously Britain was the first country to industrialize. Following the Napoleonic Wars and the series of political changes that took place in their wake, there was some redistribution of land and wealth that took place with the rise of democracies across Europe. France, Britain, and the Dutch industrialized in the 19th century / early 20th century through a more bottom-up process than, for example, Germany and certainly the Austro-Hungarians.

Prior to industrialization land was far and away the most important form of capital, with slaves being the second most important form of "capital" in the Americas, and indirectly in Europe, where there was little actual slavery, but where many of the products of slave labor in the Americas were destined. The radical shift with industrialization was that land ceased to be the most important type of capital, being replaced by "intellectual property" (patents, copyrights) and machinery. The whole feudal hierarchy was built on land ownership rights, so when land ceased to be the most important form of capital, and new types of capital, which the feudal lords did not own, came to create more revenue than land, the power of the feudal system and the old nobility was broken. This is not to say that land became worthless or that the feudal aristocracies were cast off and became nothing, they certainly did not, but industrialization provided the avenue for the development of a new political class which gained its wealth and power not from inheritance and taxes, but rather from the creation of new systems of manufacturing and new technologies.

With industrialization in Europe non-propertied peasants became non-propertied wage-laborers in the mines and factories. Some craftsmen became capitalists, but most craftsmen struggled to hold on to their livelihoods as they competed against the output of factories. In England, where industrialization began, the new capitalists, i.e. factory and mine owners, originated from a mix of feudal aristocrats, small merchants, and poor laborers. Aristocrats had obvious advantages in terms of capital ownership and the ability to back or start new commercial endeavors, however the aristocracy was more threatened by the rise of industrialization than they benefited from it. Small merchants, which had made up a small middle-class in feudal times, were able to use their modest property to build small factories, and then, in some cases, to expand these small factories into larger factories and to develop entire supply-chains, acquiring rights to resources such as coal, water power, cotton plantations, etc. Among the poor who became capitalists, most of them got their start by making machines and then developing patents for new machines. The patents then provided them with enough revenue to become factory owners themselves.

While the living conditions of the poor in industrializing nations arguably got worse, the increased productivity of industrialization meant that the percentage of the population that was poor was reduced and the size of the middle-class grew significantly. Living standards for the growing middle-class, in America and Europe, grew significantly during the early stages of the industrial revolution as the increased productive capacity produced many more goods at lower costs, making much more material wealth acquirable for average people.

But while industrialization appeared superficially the same in both America and Europe, there were significant differences beneath the surface.

In Europe the majority of the population prior to industrialization were unpropertied peasants, and most of the farming was performed on land owned by feudal lords. The manufacturing that took place in feudal Europe was dominated by large guilds; the middle-class was small and few people owned their own capital. Thus, with the rise of industrialization, capitalism, and democracy, European society was already largely divided into "haves" and "have-nots", already largely divided into the propertied and the propertyless.

In America, however, virtually every white family owned their own capital at the onset of industrialization. America was a nation of many small capital owners, mostly farmers, and America had a system of democracy prior to industrialization. Thus the political and ideological dynamic was radically different in America than it was in Europe. In America capital ownership was associated with economic enfranchisement; ownership of one's own capital was the means by which most of the citizens ensured that they kept the fruits of their own labor. In Europe the opposite was the case. In Europe, and most every other country, capital ownership was the means of disenfranchisement. Because capital ownership was concentrated in these places under feudalism, ownership of capital was the means by which the wealthy minority economically disenfranchised the non-propertied laboring majority. So with the onset of industrialization in America, political power and ideological persuasion were divided primarily between wealthy large capital owners and populist small capital owners; slaves and unpropertied laborers were politically insignificant. In Europe and elsewhere the political power was divided primarily between the large capital owners and the laborers who did not own capital at the onset of industrialization, since the "small capital owners" outside of America were only a minor group representing neither a large segment of the population nor a large amount of wealth and power. In America during early industrialization wage-laborers were disproportionately women, blacks, and "fresh off the boat" immigrants, while almost all native-born white men were capital owners. In Europe wage-laborers and capital owners were of course more homogenous within a given country, so there were fewer inherent divisions between wage-laborers and capital owners other than class. This meant that class interests were in sharp focus in Europe, while class interests were heavily conflated with racial, ethnic, and gender issues in America.

The Gilded Age

With the conclusion of the Civil War roughly 15% of America's population changed overnight from being a legal form of capital to being citizens. Virtually all of the freed slaves immediately became unpropertied laborers, and they and their descendants would largely remain so for generations. This significantly increased the number of citizens who were non-capital-owning workers, but due to the continued political repression of blacks and conflicts between white workers and black workers, this influx of laborers failed to do much to strengthen the political power of workers in America. In fact in some ways it made political opposition to the rights of wage-laborers stronger, due to efforts to continue to restrict the rights of blacks by restricting the rights of wage-laborers, which virtually all blacks were.

The period immediately following the conclusion of the Civil War, from 1870 to 1880, saw the fastest rate of economic growth in American history as industrialization and banking reforms took hold. From the end of the Civil War through to the 1920s what occurred in America was a rapid decline of individual capital ownership. This was a de facto consequence of industrialization, improvements in farming, and the rise of corporations. As farming became more efficient the prices of agricultural products dropped, in some cases quite rapidly. From the end of the Civil War through the early part of the 20th century the United States produced an excess of food due to its heritage as a farming nation and due to the still massive amount of land available for people to acquire cheaply or for free. Farming was still the simplest and most direct way for someone who didn't own their own capital to start their own business and establish themselves as a capital owner, which resulted in an excess of farmers, which in turn resulted in falling agricultural prices.

The excess of food, which then became a major export of the United States, also aided the process of industrialization. As farming became more efficient, fewer people were needed to work the farms, resulting in more and more children of farmers leaving their family farms and going into the cities to look for work. This produced an ample supply of wage-laborers to satisfy the demands of growing industry. Since women and children were the least in demand as workers on farms, a large portion of the growing number of wage-laborers were women and children, which meant that wage-laborers were composed largely of people who were unable to vote and had very little political power: women, children, and blacks. While blacks were technically allowed to vote, Jim Crow laws and other means of disenfranchisement meant that blacks had limited ballot access and were very weak politically.

With growing industrialization after the Civil War corporations became increasingly prominent, contributing significantly to the decline of individual capital ownership. Corporations are legal entities created by the government through which multiple individuals can come together to combine their capital to create a "collective individual". Over time these entities gained virtually all of the same rights as human citizens, in addition to limited liability and "eternal life". With the rise of industrial capitalism the collective power of combined capital led to increases in efficiency against which individual capital owners could not compete. This led to an increasing collectivization of capital ownership, largely via the entities of private corporations in capitalist countries.

The rise of modern corporations was indeed pioneered in America, despite the fact that many Americans, including some of the nation's founders, were highly skeptical of corporate power. As noted in a letter to George Logan in 1816 discussing the fall of the British aristocracy, Thomas Jefferson saw in private corporations, which were many times weaker than they are today, the potential for the creation of a new American feudalism.

It ends, as might have been expected, in the ruin of its people, but this ruin will fall heaviest, as it ought to fall on that hereditary aristocracy which has for generations been preparing the catastrophe. I hope we shall take warning from the example and crush in it’s birth the aristocracy of our monied corporations which dare already to challenge our government to a trial of strength and bid defiance to the laws of our country.
- Thomas Jefferson; Letter to George Logan, November 12, 1816

In Europe corporations had existed for centuries, but their rights and duties were much more tightly restricted; they were typically only given charters for a finite period of time, and the issuing of a charter for incorporation was granted only rarely. Most corporations in the 18th and early 19th century were not-for-profit, with large numbers of them being educational institutions, like universities. Corporations were not even allowed in finance at all under English law, with the sole exception of the Bank of England, which, while initially privately owned during feudal times (by the nobility), was also highly regulated and controlled by the government.

In America, however, corporate law was left up to the states, and the states developed very liberal laws regarding corporations in efforts to attract capital. Indeed the Constitution of the United States contains no provisions that deal with incorporation, because the Constitution was written at a time when corporations were not very common and the overwhelming majority of property was held by individuals, not collective groups. The entire Constitution deals only with individual rights, and has no provision for collective rights. In order to address this problem, corporations came to be defined as individual persons by the mid 19th century. This was done in part because all contract law was based on relationships between individuals, so in order to meet the existing precedents of contract law the corporation was defined as a distinct individual, so that the corporation itself could enter into binding commercial contracts. What this did, however, was essentially shield the actual individual people who came together to form the corporation from much legal responsibility, putting the legal responsibilities on the corporate entity, not on real humans.

As of the early 19th century, in most states corporations were barred from making any political contributions, barred from owning stock in other corporations, and barred from engaging in any activity, charitable or otherwise, outside of the specific purpose of their charter. Corporate officers were not protected from liability for corporate acts, and corporations had to be accountable to the state legislatures, which had access to their books and could, and did, revoke the articles of incorporation if a corporation was deemed not to be acting in the public interest or was exceeding the bounds of its charter. The general legal trend from the mid 19th century through to the beginning of the 20th century was to continuously remove these types of restrictions and grant more and more power to corporations. By the early 20th century corporations, not individuals, had become the dominant capital owners. The intellectual justification by elected officials and judges for granting increasingly more power to corporations and providing government support to foster their growth was that these entities, by the very nature of their size, were more efficient at production than smaller entities. Large corporations that owned vast amounts of capital were better able to produce more goods at lower costs, and thus, the argument went, granting them more power and allowing them to reap massive profits was an acceptable price to pay for the increased productivity that these corporations produced. Of course there were other reasons why elected officials in particular supported the interests of growing corporations, namely the financial support provided to them by those who sought their support, and of course ideological sympathy for the growth of powerful private institutions.

Between the Civil War and the beginning of the 20th century numerous powerful monopolies emerged, and these monopolies were essentially given the blessing of government, with the belief that the economies of scale created by these monopolies would yield benefits that would exceed any negative consequences. There were an astounding number of corporate mergers around the turn of the 20th century. From 1895 to 1904 over 1,800 manufacturing businesses merged with rivals, with 30% of these resulting in new corporations that controlled over 70% of market share (today's legal definition of a monopoly). Even in cases where mergers didn't take place, collusion was a standard practice of the day. Because rapid industrialization resulted in business models that involved high fixed costs and an uncertain market, the intense competition of the early phases of industrialization made investment in capital highly risky and often unprofitable. As a result it was common for "competitors" to collude to fix prices in such a way as to guarantee that all of the businesses would reap profits. Businesses at this time also colluded against organized labor and workers in general. They colluded both on the side of commodity price fixing and on the side of fixing the wages paid to workers in order to keep wages down.

A significant difference between the development of private enterprise in America and Europe at this point was that in America there were more laws against the formation of cartels than in Europe. Because of this, in Europe businesses tended to remain smaller and independently owned, but to form cooperative cartels in order to reap the benefits of larger organizations. In America however, since cartel agreements were largely illegal in private industry, this led to more mergers and consolidations, where companies simply merged to form larger organizations. The net result was more rapid consolidation of capital ownership in America than in Europe.

By the 1870s there were growing political pressures to regulate large corporations, particularly the railroads. By this time the railroad companies were the largest companies in the country, and they were rife with corruption and price fixing. The railroads were all technically private enterprises, but they had been granted many public advantages, such as land grants, rights of eminent domain, and many government subsidies. Complaints against the railroads for price fixing, both by passengers and shippers, were the most common, but the most effective political and legal pressure against the railroads came from the farmers, who had become dependent upon the railroads to ship their harvests to market. Yet at this time very little was done in the way of federal regulation of private enterprise. While the federal government did plenty to subsidize and assist the development of private enterprise, little was done to regulate it.

By the 1890s there was widespread public outrage at the actions of "big business" and widespread fear of the growing power of large corporations to manipulate prices and to drive down wages, in addition to the growing understanding that corporate money ruled Washington, yet little was being done to address these concerns at the federal level.


The Bosses of the Senate, 1889

In 1890 the Sherman Antitrust Act was passed. The Sherman Antitrust Act is still one of the most significant federal laws affecting businesses today, as it is effectively an anti-monopoly law. The law basically prohibits a corporation or corporations from engaging in behavior that significantly reduces marketplace competition. In other words, this law requires that competition exist with the exception of so-called "natural monopolies". Ironically, however, this law was first applied against unions to bust strikes, under the reasoning that the unions were operating a conspiracy to form a labor monopoly, thereby reducing "competition" in the labor market. Despite the passage of the Sherman Act, little was done to regulate private enterprise or to enforce the Sherman Act against corporations.

The Progressive Era

This all changed with the presidency of Theodore Roosevelt from 1901 to 1909, who took significant action to regulate "big business" at the federal level. The "trust busting" activities of Roosevelt consisted largely of breaking up large national corporations into smaller independently owned companies and forcing them to compete against one another. Roosevelt also presided over increased regulation of interstate commerce, especially in regard to the railroads.

What had happened in America from the end of the Civil War up to Roosevelt's presidency was that corporations and private enterprise had grown well beyond their original local scope. Prior to the Civil War there were practically no interstate businesses. There was interstate commerce, but it was almost entirely commerce between separate local businesses. After the Civil War, with the establishment of a National Banking System, the rise of industrialization, and the development of railroads, there was an explosion of companies that operated across multiple states, with workers and capital and owners residing all across the country. In addition, the prior conditions which naturally led to markets populated by many small independent actors had given way to markets dominated by monopolies or small numbers of large corporations. The idea that business was not originally heavily regulated in America is a misconception; there have always been significant business regulations in America, and things like price controls and the establishment of government-backed monopoly enterprises for the public good, such as public utilities, were in place from the founding of the country. The difference is that prior to the 20th century all of the regulation took place at the state level. Since virtually all businesses existed only at the state level prior to industrialization, and since businesses were naturally small and less powerful than the state governments, this wasn't a problem. When businesses expanded rapidly in size after the Civil War and quickly became more powerful than not just state governments, but in many ways more powerful even than the federal government as well, this is what prompted the widespread support for federal regulation of private enterprise.

The economic panic of 1907 demonstrated multiple fundamental problems with the American economy and paved the way for significant changes, including the creation of the Federal Reserve. The panic of 1907 was brought about by massive bank failures resulting from massively over-leveraged stock-market trading schemes in the largely unregulated New York Stock Exchange. Traders were borrowing astronomical sums of money in attempts to corner the market on stocks and drive up their prices. In one such case an attempted corner failed, resulting in massive losses for the trader. Those losses were all on borrowed money, and the amount of money that was borrowed was so huge that the multiple banks which had lent the money immediately failed due to runs on the banks.

The stock market quickly lost 50% of its value and runs on the banks around the country led to multiple bank failures virtually overnight. This demonstrated a clear need for greater regulation of the stock exchanges and lending practices, as well as weaknesses with the banking system, but additional events would demonstrate not only the weakness of the government's ability to address these problems, but also the ways in which America's wealthy industrialists had become more powerful than the government.

As the crisis deepened, the government was effectively impotent to do anything about it. J.P. Morgan and John Rockefeller stepped in to address the problems by overseeing the liquidation of various banks, buying companies, and depositing reserves in remaining banks (as loans). Morgan and Rockefeller contacted other wealthy Americans and acquired pledges of massive loans to keep the American banking system propped up. While there was relief that these individuals did step in to avert greater crisis, it also demonstrated how weak the government was and how disorganized and unready the nation was to handle the realities of the modern economy.

Despite the reforms of the early Progressive era at the beginning of the 20th century, the general economic trend remained the same. While large national corporations had been broken up into smaller companies and forced to compete against each other, the trend was still toward larger aggregation of capital and higher concentration of capital ownership. The percentage of farmers and the self-employed continued to decline and the portion of the population who were wage-laborers increased as continued industrialization increased efficiency and made smaller business operations unable to compete with larger national or regional corporations.


After Teddy Roosevelt's reforms, and laws passed to strengthen the role of unions, there was a brief period prior to the end of World War I in 1918 when union membership increased and wages and working conditions improved. During this time economic inequality was on the decline. However, after the 1917 Communist Revolution in Russia and the conclusion of the Great War, the tide turned against organized labor, and with decreasing industrial demand due to the conclusion of the war, in addition to the return of men from overseas, labor's share of the national income began declining and economic inequality began to rise again. By the 1920s the old American economy grounded in farming and widespread individual capital ownership was essentially gone, with production now dominated by corporations employing wage-laborers.


The Roaring 20s

Prior to World War I the United States was a debtor nation, with the country being in almost continuous public and private debt to foreign countries since the day of its founding. However World War I changed this. All of America's foreign debts were eliminated via America's contributions to World War I and post-war pledges of reconstruction aid to Europe. This elimination of the debt via the war paved the way for a massive explosion of household and business credit in America, which helped to fuel the economic excesses of the 1920s and played a significant role in creating the economic bubble that burst in 1929, followed by the Great Depression.

The 1920s was the first time in American history that most major purchases were made using credit. During this time 75% of cars, 70% of furniture, 75% of radios, and 80% of household appliances were purchased on credit. This explosion of credit enabled an explosion of productive capacity, which allowed prices to stay fairly stable in the face of major expansions of the money supply, both in terms of real money supply created by the new Federal Reserve and the virtual money supply created by all of the credit. Even though one might expect rising inflation, in fact inflation stayed under control due to rapidly expanding productive capacity, as industrialization blazed ahead at break-neck speed. During the 1920s wages did continue to rise modestly in real terms overall, but with the decline of organized labor a larger and larger portion of the value created by corporations went to capital owners and executives instead of the workers. Industrialization was increasing efficiency so fast that the real economic gains were able to be shared by virtually everyone (who was white), though this was largely due to credit.

The real gains in productivity and the massive expansions of credit, in addition to the fact that a disproportionate share of the gains were going to capital owners, all contributed to the rapid rise of stock prices. The rapid rise of stock prices during the 1920s was largely a product of two things: the growing savings of the wealthy, who were looking for somewhere to put their money to get a return on investment, and credit-financed speculation. As was the case leading up to prior American stock market crashes, stocks were highly leveraged with major amounts of credit. Low interest rates and rapidly rising stock prices induced speculators to buy on credit amidst a widespread faith that prices would rise indefinitely, with stock brokers and lenders happy to facilitate the transactions for the short-term fees they generated. In 1900 roughly 1% of the American population owned stocks; by 1929 over 10% of the population owned stock. Because the unemployment rate remained very low, prices were stable, and wages were rising, the overwhelming majority of economists believed that America had entered a "new era" of economics, where government intervention had become unnecessary and prices, productivity, and profits would rise forever.
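To see why credit-financed speculation was so fragile, here is a minimal sketch of buying stock on margin, using hypothetical numbers (the article does not give specific margin terms): the speculator puts up only a fraction of the purchase price and borrows the rest, so modest price moves produce outsized gains or losses on the cash actually invested.

```python
# Toy sketch of 1920s-style margin buying; all figures are hypothetical.
shares = 100
price = 50.0
margin = 0.10                      # speculator's own cash: 10% of the position
cash_in = shares * price * margin  # $500 down
loan = shares * price - cash_in    # $4,500 borrowed from the broker

def equity_after(move):
    """Speculator's remaining equity after the price moves by `move` (fraction)."""
    value = shares * price * (1 + move)
    return value - loan

print(equity_after(+0.10))  # 1000.0 -> a 10% rise doubles the $500 stake
print(equity_after(-0.10))  # 0.0    -> a 10% fall wipes the stake out entirely
```

The asymmetry is the point: as long as prices rose, leverage multiplied gains and invited more borrowing; once prices fell, even slightly, borrowers and the banks behind them were immediately insolvent.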

The Big Crash

The overwhelming majority of economists and public figures were touting the fundamental strength and soundness of the economy right up until the crash in late 1929, however this view was not universal. There were economists and scholars who did warn of an impending crash. In February of 1929 the newly created Federal Reserve, feeling that credit based speculation had gotten out of control, announced that it would no longer support bank loans for stock purchases. Famously, Roger Babson warned that, "Sooner or later a crash is coming, and it may be terrific ... factories will shut down ... men will be thrown out of work ... the vicious circle will get in full swing."

While most people focus on the stock market crash, the stock market crash did not cause the ensuing Great Depression; the crash was a symptom of the fundamental underlying economic problems, and the true depression did not take hold until over a year after the crash.

The cause of the overall economic crash, though much debated, was largely over-borrowing by consumers who were ultimately unable to pay back their loans; as the share of income going to workers declined, spending could not keep pace with productivity. Productive capacity had expanded at a higher rate than consumptive capacity. The massive expansion of credit served as a means to bridge the gap between the two, but since wage growth never caught up to productivity growth, the credit bubble ultimately burst and demand rapidly plummeted. By 1929 both the consumer markets and the stock market were highly leveraged on credit. By mid-1929 retail sales were in decline, and by September stock prices had begun to fall. In October of 1929 a rapid decline of stock prices was underway.

The crash of the stock market in 1929 didn't directly cause unemployment or factory closures. Again, the crash was a symptom, not a cause. The cause was a fundamental lack of aggregate demand. The economic crash started in America and quickly spread around the world. At the time of the downturn America was the largest consumer market in the world. Europe was still recovering from World War I, and no other countries were yet sufficiently industrialized or had sizable middle-classes. When demand collapsed in America because the working-class did not have enough income or wealth to consume the goods being produced, there were no other global consumers to step in and prop up demand. Sales rapidly declined, so factories closed; unemployment grew, so demand shrank further; more factories and retail stores shut down, people lost their homes, loans went into default, and the cycle fed on itself, round after round.

In the months and years following the crash of 1929 most people expected that the economy would improve and the stock market would gradually return to its peak levels. This was not to be. The unemployment rate went from 3.2% in 1929 to a high of 24.9% in 1933 when Franklin D. Roosevelt entered the presidency.

The Great Depression and the New Deal

The details of the Great Depression and the New Deal are complex and much debated. At a high level, however, what must be said is that FDR entered office in 1933 and died in office in 1945, a span of 12 years, and during that time Roosevelt maintained strong public support and unprecedented legislative power, which his administration used to significantly change the structure of the US economy and the role of the federal government in regulating it. In the short term, the most significant actions taken by the Roosevelt administration were the regulation of banks, the requirement that citizens return all gold coins and gold certificates to the US Treasury, the destruction of agricultural stocks in order to raise prices, and the creation of public works programs to provide jobs for the unemployed.

Over the long term the major impacts of the Roosevelt administration were increased federal regulation of the economy, the creation of major social safety net programs, expansion of the federal tax base, and the adoption of a highly progressive income tax system. What the Roosevelt administration did not do was fundamentally change the structure of the capitalist economy. The effect of the New Deal was to regulate capitalism, not to change its fundamental structure. FDR's policies were seen at the time, and in retrospect, as a means of mitigating the excesses of capitalism while allowing the fundamentals of the system to remain unchanged. The stock market remained in place, the relationship between capital owners and wage-laborers remained in place, the rights of property ownership remained in place, the system of private banking remained in place, industries were not nationalized, boards of directors stayed in place at corporations, and the profit motive remained the prime driver of business, yet restrictions were placed on all of these things. Banks remained private, but they had more rules to follow. Capital gains remained in private hands, but they were more heavily taxed. Wages remained largely determined by individuals within a labor market framework, but minimum wages were implemented and collective bargaining was officially sanctioned by the government.

It is important to note that the changes brought about by the Roosevelt administration were broadly supported by the American public. Indeed, while in office FDR was one of the most popular presidents of all time. At the time the Democratic Party, of which he was a member, was still largely a Southern party, and one heavily tied to farmers. The political support of American farmers and Southern social conservatives was essential to the passage of Roosevelt's New Deal agenda, which is why many of the benefits of the administration's programs went to rural Americans. Rural Americans received a disproportionate share of the benefits of the public works and economic development programs of the New Deal era. Rural electrification, road building, dam building, farm subsidies, and the like all primarily benefited rural Americans and family farmers, who, while dwindling as a share of the population, still represented about 25% of Americans at this point and were powerful politically. In 1933 only about 10% of American farms had electric service; within two decades the figure exceeded 90%, nearly all of it provided through federal government programs.

The Roosevelt administration did make attempts at a planned economy, through various government direct-employment programs, production targets for industries, financial incentives for targeted industries, and so on, but while much of the underlying planning may have been sound, by the time these plans were implemented they had been so distorted by political wrangling and horse-trading that priorities were often set more by the bargaining of special interests than by sound economic policy.

During the 1930s there was a high degree of corporate consolidation. Business leaders and many economists placed the blame for the depression on excess competition, and thus sought to reduce restrictions on collusion and anti-competitive practices. The Roosevelt administration basically went along with this assessment and created agencies to regulate and oversee the consolidation of businesses and to allow greater collective bargaining among businesses, in other words to allow forms of collusion. The result was that, both due to the natural tendency toward consolidation after an economic crash and due to government policies that facilitated consolidation, capital ownership actually became much more highly concentrated during the 1930s than it had been previously.

The net effect of the Great Depression on capital ownership was severalfold: the percentage of the population owning stocks dropped dramatically from the height of 1929; the corporations that survived the crash generally got bigger as competitors went out of business or were absorbed through mergers; investment in capital declined significantly; startup businesses declined; and patent applications declined, though not dramatically. The percentage of the population working for themselves did increase, but out of desperation rather than opportunity, and few of these self-employed people managed to amass any capital or develop meaningful businesses. Many of those who were "self-employed" out of desperation provided services or worked as small-time merchants, their primary asset remaining their manual labor. Overall, during the 1930s (and for several decades beyond) both corporations and government grew, with both accounting for an increasing share of employment over this period.


source: Historical Statistics, series D86; Michael Darby, Journal of Political Economy, Feb 1976, p. 8

The various public works programs implemented by the Roosevelt administration were programs for the unemployed, and as such, workers in those programs were still officially counted as unemployed. Roughly 5% of the workforce was employed through public works programs for the duration of those programs. The graph above shows the official unemployment rate in blue alongside the total unemployment rate when all those employed through the public works programs are counted as employed.
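
To make the two measures concrete, here is a minimal sketch in Python of the arithmetic, using invented round numbers rather than actual 1930s figures (the real series are in the sources cited above):

```python
# Hypothetical round numbers for illustration only -- not actual 1930s data.
labor_force = 50_000_000     # total civilian labor force
jobless = 10_000_000         # people with no work at all
relief_workers = 2_500_000   # workers in public works relief programs (~5% of the labor force)

# Official measure: relief workers are counted as unemployed.
official_rate = (jobless + relief_workers) / labor_force

# Adjusted measure (the approach taken in Darby's reanalysis, cited above):
# relief workers count as employed.
adjusted_rate = jobless / labor_force

print(f"official unemployment: {official_rate:.1%}")  # 25.0%
print(f"adjusted unemployment: {adjusted_rate:.1%}")  # 20.0%
```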

The official unemployment rate did decline from 1933 to 1937, but it rose again in 1938 after the Roosevelt administration, facing pressure from opponents and worried about the national debt, started cutting back on direct aid to the economy. By 1940 American production related to World War II had already begun, even though the US would not officially enter the war until December 1941, after the attack on Pearl Harbor.

Income inequality fell only slightly during the 1930s and was not significantly reduced until after World War II. One of the reasons that unemployment remained high throughout the 1930s was that, despite significant government expenditures and works programs, consumer demand remained very low. Among those who were employed, the threat of losing one's job led people to put most of their excess income into savings as a hedge against joblessness. Yet even though significant money was being saved, people greatly distrusted banks, so relatively little of it was deposited in private banks, where it could have been lent out. Likewise, private banks had become highly risk averse, and since interest rates were so low they had little incentive to lend. The result was increased savings with no corresponding increase in lending or investing. Consumer demand remained low, so there was still nothing to significantly drive production, and thus little reason to hire workers while excess productive capacity remained in the system.

The Wartime Economy

America's entry into World War II changed all of this by creating an external driver of significant demand. The American wartime economy of World War II can in some sense be thought of as the New Deal on steroids. The structuring of the government's role in the economy during the 1930s provided a strong framework through which the federal government was able to effectively take control of the American economy during World War II and command production through a centralized system. The wartime economy of the US during World War II is the closest that the country has ever come to a fully centrally planned economy.

During World War II the Roosevelt administration created the National Defense Advisory Commission (NDAC), the National Defense Research Committee (NDRC), the War Production Board (WPB), the Office of Price Administration (OPA), and the Office of War Mobilization (OWM). Many of the agencies created under the New Deal to manage the economy were eliminated during World War II and supplanted by these new wartime production agencies. Through these and other agencies the federal government gained almost complete command over the entire economy. Virtually all prices were fixed or capped, with the caps set out in huge manuals delivered to businesses, covering everything from steel to mayonnaise. Practically every material was rationed. Production and raw material extraction facilities were either built directly by the government when needed, or the government paid private companies to expand or modify their existing facilities for wartime production.


source: US Bureau of Economic Analysis, National Income and Product Accounts, Table 1.1 & Table 3.2

From 1939 to 1944 GDP practically tripled, and unemployment was cut from roughly 18% down to around 1%, entirely through the massive infusion of government spending and economic controls. What the war had done, essentially, was provide the political will to fully engage in a Keynesian government-directed economic stimulus program. While the New Deal programs of the 1930s seemed large at the time, they were smaller than what Roosevelt was recommending and seeking. And while there was widespread popular support for the New Deal stimulus programs, there had also been strong and focused opposition, largely from wealthy private citizens, politicians, and economists, both for ideological reasons and out of a genuine belief that government spending and government programs would undermine the economy and prevent a recovery. There was serious concern during the 1930s about the federal deficit and the debt being accumulated by the government, and this made it difficult for the Roosevelt administration to engage in truly significant economic stimulus. What the war economy showed, however, was that, given enough control and enough spending, the government was capable of significantly stimulating the economy and creating full employment, at least in the short term.

After World War II many economists wondered whether the Great Depression could have been ended much sooner had World War II-level stimulus been applied in the early 1930s. This is a difficult question. Theoretically the answer would seem to be yes, but it isn't quite that simple. It is not clear that the level of borrowing and deficit spending exercised during World War II would have been possible during the 1930s, and even if the money could have been borrowed, it might not have been as easy to repay as it was after the war. During the war lenders were obviously willing to extend virtually unlimited sums to America. In addition, the war made it politically possible to raise taxes to a degree that hadn't been possible before; taxation came to be seen as a patriotic duty of national self-defense, and it was during World War II that the tax base was significantly expanded and tax rates were significantly increased. A significant amount of money was also raised from war bonds purchased by American citizens, and people simply wouldn't have bought bonds on that scale for any reason other than war. Likewise, after the war it was easier for America to pay down the national debt because of agreements made at the war's conclusion. Importantly, the war left America as the sole economic superpower in the world, a position it had not held before the war, which made it much easier to repay the remaining debts.

So what we can say is that the scale of debt accumulation and repayment that fueled the wartime economy would not likely have been possible without the conditions of war. Even if, in theory, the political will had existed in 1933 to mount the same kind of massive government effort, and even if the same tax increases had been put in place in 1933 that were in place in the 1940s, it is still unlikely that the government could have borrowed as much money as it did during the war, because lenders wouldn't have been willing to lend it and citizens wouldn't have bought the bonds. And even if the money could have been borrowed, paying it back at that time would likely have been much more difficult than it was after the war. Nevertheless, the experience of the war economy does indicate that had significantly more government stimulus been applied during the 1930s, funded by increased taxation and deficit spending, unemployment could have been reduced much more significantly and rapidly.

There was one very important difference between the 1930s and the war economy of the early 1940s, however: the relationship between government and the private sector. During the 1930s, while there was certainly cooperation with the private sector and attempts to aid even the captains of industry, by and large the relationship between government and the private sector was antagonistic. Roosevelt portrayed corporations and wealthy bankers and industrialists as the enemy of the people, and economic policy of the time sought largely to regulate business, not facilitate it.

When the war mobilization efforts ramped up, however, the opposite became the case. Roosevelt knew that meeting the production needs of the war required manufacturing capacity and know-how that rested almost entirely in private hands. As such, captains of industry were invited into the government and took key departmental and advisory positions. While the government had already been contracting heavily with private companies during the 1930s, the war effort brought both a new level of contracting and a new urgency to it. This meant less focus on "fair" contracting and on getting the best deal for the money, and more focus on simply getting the job done. Prior to World War II military contracts were awarded through sealed bids, but with the war the system changed to negotiated bids, a more personal and subjective process that brought bidders and those awarding the contracts into closer personal relations.

While FDR had strenuously denounced war profiteering and in fact enacted special taxes on war profits, the reality was that during this time of unprecedented economic mobilization war profits were inevitable. As Secretary of War Henry Stimson stated, "If you are going to try to go to war, or to prepare for war, in a capitalistic country, you have got to let business make money out of the process or business won't work."

The majority of World War II production contracts went to large corporations such as General Motors, Ford, and DuPont. The rate of profit for these companies in World War II was much lower than during World War I, when rates of profit had reached over 1,000%, but it was still well above Depression-era profits. Furthermore, there was little or no risk once contracts were established. General Motors was the leading contractor of World War II, receiving about 8% of the total value awarded.

Government contracts were awarded in a fairly concentrated manner; two thirds of the research and development contracts went to just 68 companies. During the war the US government became the largest investor in American private business, to the tune of $17 billion (in 1941 dollars). The business relations between certain private corporations, indeed specific individuals in those corporations, and the US government became very strong during this time, and huge profits were reaped at little risk by companies and businessmen.


Production of B-24 Liberators at Ford plant in Detroit

It was at this time, for example, that the aircraft industry became the number one industry in the nation, having previously not been a major industry at all, and to this day the aviation industry has one of the strongest relationships with the federal government of any private industry.


Walter & Ann Beech overseeing WWII Beechcraft production line

During the war wages for average workers increased significantly, yet due to rationing, the fact that most production was geared toward war materials, and the fact that most men aged 18-30 (and quite a few women as well) were enlisted in the military, there was relatively little to spend money on and fewer people to do the spending. The result was a massive increase in savings: personal savings ballooned by over 700%, from a national total of $4.5 billion in 1940 to $39 billion by 1944.

Rise of the Middle Class

While government spending declined after the war was over, it never returned to pre-war levels. Having brought such a large portion of the economy under government control during the war, both inertia and the positive results of government stimulus ensured that the role of government in the economy would be forever changed by the World War II experience. The single largest component of the increase in post-war government spending relative to pre-war levels continued to be military spending, but government spending on infrastructure projects, education, scientific research, and various forms of economic stimulus and regulation all increased after the war as well.

When World War II ended the United States was uniquely positioned for massive economic growth. Because of America's role in the liberation of Europe and in rebuilding post-war Europe and Japan, much of the debt accumulated prior to and during the war was quickly either forgiven or paid off. At the same time, the economies and societies of Europe and Asia were shattered. The United States, along with Canada and Australia, was one of the few countries to come out of World War II with increased productive capacity and an improved economic climate. Compared to the United States, however, Canada's and Australia's populations and economies were (and are) relatively small.

The rural electrification and modernization programs of the 1930s set the stage for massive improvements in farming efficiency, so after the war the United States was able to supply huge amounts of food to recovering Europe and Asia, even as the number of farmers as a percentage of the population continued to decline and domestic food prices remained low. GDP declined slightly immediately after the war, but even during that period production of consumer goods exploded as factories shifted from war materials to consumer goods. The massive savings that Americans had built up during the Great Depression and the war fueled mushrooming consumer demand and provided a solid base for business investment.

In the 1940s and 1950s manufacturing grew significantly as a percentage of the economy, and virtually all of that growth took place through large corporations. During these decades the percentage of the American population working for large corporations grew significantly, with most of the growth occurring in the manufacturing sector. At this time the largest corporations in America were manufacturing, food processing, and raw materials corporations, followed by media corporations. While there were a few large retail corporations, such as Woolworth's, Sears, and Macy's, by and large the retail, hospitality, and food service (restaurant) industries were still dominated by small businesses and individual proprietors. The financial sector, heavily regulated and decentralized at this point, was relatively small as well. With manufacturing dominant, manufacturers drove the markets, setting prices and determining trends and distribution channels.

The dominance of manufacturing arose because industrialization, by its very nature, was centered on manufacturing, but also, importantly, because government subsidies during the war effort were concentrated almost entirely on manufacturing; it was thus the manufacturing sector that received almost all of the World War II stimulus and benefits. It was also at this time, however, that farming began its transition to an industrial basis. Prior to World War II almost all farming was done by relatively low-tech family farmers. But due to the modernization programs of the 1930s, which brought power and paved roads to the farmlands, the increased need for food production during the war effort, and the rising capital costs of the most modern farming equipment (which many individual farmers could not afford), corporations began moving into the farm sector in increasing numbers beginning in the 1940s.


source: http://www.ers.usda.gov/publications/arei/eib16/Chapter1/1.3/


source: http://www.usda.gov/factbook/chapter3.htm

After World War II significant consolidation took place in farming. Corporations and wealthy families bought up and bought out many smaller farmers, resulting in far fewer, but much larger, farm properties. This process of consolidation and mechanization, subsidized by the federal government through New Deal policies, led to dramatic increases in output while the percentage of the population engaged in farming rapidly declined.

America's steel industry, which was the backbone of American industrial production prior to and during World War II, began its decline almost immediately after the war. That decline stemmed from several instructive factors. During the war American steel companies had become dependent on heavy government subsidies, and when the war ended steel producers lobbied strongly to have those subsidies continue; the Eisenhower administration, however, rejected those demands and withdrew federal subsidies from the industry. At the same time, it was American policy to provide aid to Europe and Japan to help them rebuild after the war, so the American government was in effect subsidizing the European and Japanese steel industries during the late 1940s and 1950s.

More important, however, is the fact that the American steel industry was heavily invested in old technology and practices. The American steel industry was the largest and most advanced in the world prior to the war, having developed many new technologies and practices, but having invested heavily in those technologies it was then reluctant to adopt new ones. American steel manufacturers did invest heavily in new equipment and expansion after the war, but they did so by investing entirely in old technology, with little aid from the government.

After World War II the European and Japanese steel industries were effectively starting over from scratch, and with significant investment from the American government they invested heavily in the newest technologies and techniques, including new types of furnaces and new casting methods. While the US had been the dominant steel producer in the world from the late 19th century through World War II, providing almost 50% of global steel output in 1950, by 1960 the US produced only 25% of global steel, and its share continued to decline after that.

This was a pattern that would play out many more times in American manufacturing. The United States would often be the first to develop new technologies or production systems, but would then invest heavily in them and stick with them for long periods, while producers in other countries adopted incremental advances over American technologies and processes. American production has displayed, and continues to display, a pattern of big advances followed by periods of complacency and stagnation, as opposed to European and Asian producers, who display a pattern of more incremental and continuous advancement over time.

Nevertheless, American manufacturing and economic innovation provided the basis for broadly shared economic growth. The broad sharing of income and benefits enjoyed by the white middle-class during the post-war period was heavily influenced by unionization (unions at this time often discriminated against non-whites), which reached its peak during this era. Unionization was most prevalent in manufacturing, so it is no coincidence that unionization rates peaked at around the same time that manufacturing peaked as a percentage of the American economy. Through a combination of government policies, market conditions, public advocacy, and union influence, working conditions, wages, health care benefits, and retirement benefits became more equitably distributed among white Americans in the decades immediately following World War II than at any prior time in American history. While prosperity was widespread among whites, poverty remained endemic among blacks and Hispanics, with around 50% of blacks living in poverty as of 1950, compared to only 15% of whites. By the end of the 1960s, however, African American poverty had been significantly reduced (though it remained high), in large part due to federal programs such as President Lyndon Johnson's War on Poverty and civil rights legislation.


The period from the end of the war through the end of the 1960s saw the rise of robust white middle-class American consumerism. This consumerism was enabled largely by the solid savings Americans had built up during the war-time economy, the increased productive capacity built up through government subsidies during the war, broadly shared income, and government policies that heavily subsidized the white middle-class. Consumerism was also heavily driven by massive increases in corporate advertising, especially advertising directed at women and children, as American families widely acquired televisions after the war. Prior to the war only the richest families owned television sets and there was very little programming, but by the end of the 1950s virtually every white home had at least one television set and a steady stream of programming was provided by the nation's three corporate broadcasters.

Prelude to Change

The post-war economic expansion was an interesting time, when income distribution was relatively egalitarian but capital ownership continued to be consolidated. While unions and policies were able to drive equitable sharing of the fruits of production in the short term, individual capital ownership rates continued to decline throughout the 1950s and 60s. There was a modest increase in the percentage of Americans owning stock during those decades, but many Americans remained wary of the stock market after the Great Depression. The portion of national income going to labor reached its peak during the 1940s and 50s, and the rise of Social Security, along with the generous pension plans available to most corporate employees, provided a new level of long-term economic security without direct capital ownership. While stock ownership did increase modestly after the war, the decline in direct individual capital ownership more than offset that increase. As mentioned earlier, this decline was most pronounced in the farm sector, but it occurred across the economy as a whole as more people became professionals and went to work for corporations.

While the 1950s and 60s were marked by the dominance of manufacturing corporations, large national retail and hospitality corporations began rising in the 1970s. The 1970s were also a time when new media and technology corporations began their rise, heavily shaped by new computer and satellite technology, much of which had been pioneered through government-funded projects (largely military and space) during the 1950s and 1960s.

Overall, however, the 1970s are most widely noted as a period of general global economic "stagnation", particularly in America. The American economy was heavily impacted by the oil crises of 1973 and 1979, both of which contributed to high levels of inflation and rising unemployment. In 1973 OPEC (the Organization of the Petroleum Exporting Countries) imposed an oil embargo on the United States and other Western-aligned countries, including Japan, due to their material support for Israel during the Yom Kippur War. As a result, the Nixon administration rationed petroleum fuels across the nation to deal with the significant shortages.

The energy crises of 1973 and 1979 not only had general impacts on the economy in terms of inflation, they also highlighted problems with American manufacturing and products. By the 1970s America faced significant foreign competition in manufacturing, primarily from Japan but also from Europe. European and Japanese manufacturing processes had become generally more advanced than American processes, particularly in the auto industry but also in electronics. In addition, Japanese and European car makers produced more fuel-efficient cars than American auto makers, in part due to more stringent fuel economy standards in those countries. As a result, foreign manufacturers, particularly the Japanese, gained significant market share in the United States during the 1970s, especially in the auto industry.

In 1960 foreign imports accounted for only about 5% of the cars sold in America; by 1980 that figure had risen to almost 27%. And while the United States produced 49% of all cars in the world in 1960, that number had fallen to 21% by 1980, all as a result of increasing foreign production as the European and Japanese economies cemented their recoveries from World War II, while American production levels remained relatively unchanged throughout the 1960s and 70s.

The 1970s were a time of increasing challenges for American manufacturers, which resulted in declining profits. The American economy remained highly domestic: the overwhelmingly dominant consumer market for essentially all American companies was the American market, and the majority of products sold in America were produced in America. The challenges American manufacturers faced were increasing competition from foreign manufacturers (primarily Japanese), the increasing power of growing retail corporations with stronger negotiating positions, significant demands from labor unions resulting in multiple large strikes and walkouts, and of course inflation and high fuel prices, which both increased the costs of production and reduced consumer demand.


source: http://www.hinduonnet.com/businessline/2001/07/28/stories/042820ma.htm

In response to the challenges faced by the American economy as a whole during the 1970s, particularly those faced by American manufacturing, which had been the backbone of the economy since the early 1900s, significant support grew for major economic reforms. The most significant of these reforms fall under the banner of deregulation. Virtually all of the economic regulations put into place during the 1930s and 1940s remained in place until the 1970s, with new regulations added during the 1950s and 1960s as well.

Deregulation efforts began under the Nixon administration in the early 1970s, with an initial focus on the transportation sector, primarily railroads, which had been heavily regulated since the 1880s. Deregulation gave railroad corporations greater freedom to set prices, acquire competitors, and determine employee compensation. Railroad deregulation was followed by airline and trucking deregulation under the Carter administration in the late 1970s. The initial effect of deregulation in these industries was reduced barriers to entry and greater price competition, which increased the number of operators and reduced profits. Generally, however, after a period of increased competition these industries reconsolidated and profits rose again, often at a cost to workers, who over the long run saw their incomes as a share of total revenue decline in these industries after deregulation.

The 1970s were also a period when support for significant financial deregulation began to build, though financial deregulation would not become law until the 1980s and 1990s. The banking industry changed dramatically during the 1970s, and these changes became a significant driver of deregulation. A combination of advances in technology and processes, the accumulation of assets by large corporations, and the demand for national banking by large national and international corporations provided significant impetus for the growth of large banks. America had historically been a nation of small banks, going all the way back to the country's founding, in large part due to popular suspicion of banking and laws that forced banks to remain small and regional.

The development of computer technology, however, significantly changed banking. In the 1970s business computing was dominated by large mainframe systems, which only large banks could afford. Computing became a significant driver of efficiencies through economies of scale in banking. Prior to computing, economies of scale were not significant in banking, so a system dominated by small regional banks was close to optimal; as computing was adopted, larger national banks became more competitive, but during the 1970s regulations prevented the rise of such banks. This contributed to significant pressure to deregulate the financial industry so that larger financial corporations could take advantage of economies of scale. Likewise, growing corporate banking customers wanted larger banks that could handle their growing assets and make operation on a national and international level easier, so support for deregulation of the financial industry grew across the business community during the 1970s.

The 1970s also saw the beginning of the decline of the pension system. Section 401(k) of the IRS code, which provides for tax exemption on income deferred into retirement accounts, was introduced in 1978 and became effective in 1980. The 401(k) tax exemption was initially introduced specifically as a benefit for high-income earners, but it was soon amended to encourage inclusion of those with average incomes. By the mid-1980s many large companies had begun shifting their retirement benefit plans from defined benefit pension plans to much less costly 401(k) retirement plans.

The 1970s also saw significant increases in women entering the workforce. While the women's movement of the 1960s certainly had some effect in making it more acceptable for women to enter the workplace, and improved working conditions and pay for women, the primary drivers of women entering the workforce were purely economic. Several factors drove women into the workforce in the 1970s and continued to do so long after.

One of the most direct drivers in the 1970s was actually the growing unemployment rate. Because most households had only one worker, and that worker was male, when husbands lost their jobs in the 1970s one common response was for wives to go to work to make ends meet while their husbands were unemployed. In many cases wives then remained working even after their husbands found new employment. Even though unemployment was high and men had difficulty finding work, women were still often able to take part-time and lower-paying positions that, while not fully offsetting their husbands' loss of employment, still clearly helped. Furthermore, women were able to work even while their husbands drew unemployment benefits.

But there were even more fundamental drivers of the increasing rates of women in the workforce in the 1970s. First of all, we have to recognize that women have always participated heavily in the "workforce", but the "workforce" underwent a dramatic change due to industrialization and the rise of capitalism in the 20th century. In fact, the 1950s were an anomaly, a period when women's workforce participation was unusually low. Prior to industrialization virtually all work took place at home, and women were major contributors to that work, both in terms of domestic chores and "business" output. In the early phases of industrialization women were likewise a major component of the employed workforce, largely because women were not property owners and thus comprised a significant portion of the wage laborers. The women who didn't work outside the home during this time were virtually all wives who spent most of their time doing domestic chores and raising children, at a time when children were not yet enrolled in full-time schooling, and/or who worked at home for the remaining home-based businesses. Women were also, of course, a huge segment of the war-time labor force in the 1940s, since huge numbers of men were enlisted in the military and birth rates had been low during the 1930s and early 1940s, meaning there were few children to take care of at home anyway.

During the 1950s, the baby boom and the widespread acquisition of suburban single-family homes, in addition to the influx of male workers, the high accumulated savings, and the strong economy, meant that families could prosper economically with only one worker, and there were significant household chores to be done, in addition to raising all of the children of the baby boom. By the 1950s, however, the home had fully ceased, for the first time in history, to be a place of production. Home-based businesses had become virtually nonexistent (outside of farming, which was in sharp decline), and so by this point essentially all production took place outside the home.

By the end of the 1960s technological modernization had reduced household chores to the point that they required no more than a few hours a week, and the baby boom was coming to an end, meaning that there were simply fewer children to take care of. Quite simply, by the 1970s there just wasn't anything productive to do at home anymore. Women had always been productive; there was never a time when half the adult population sat at home idly doing nothing. What had happened was that by the 1970s the home itself was no longer a productive place. The home had historically been a place of productivity, but with the rise of industrialized capitalism, corporations were now the locus of productivity. Women simply had no reason to remain at home anymore, because production itself had become external to the home.

The fact that homes were no longer productive places, that by the 1970s birth rates were in decline, and that all children were enrolled in full-time schooling, combined with stagnating or declining incomes for 90% of the population, meant that there was no good reason for women to stay at home, where they were unproductive, and very strong economic reasons to seek work where the work was: outside the home, as official members of the workforce.

As the 1970s came to a close, "stagflation" was at its peak. The 1979 oil crisis was in full swing, and unemployment, inflation, and the national debt were on the rise while incomes remained stagnant and corporate profits declined.

The "Reagan Revolution"

The economic and social conditions of the late 1970s brought about a desire for change in America, to which conservative presidential candidate Ronald Reagan appealed. In addition to campaigning on a conservative social platform, Reagan, who had a long public career of campaigning against federal programs and regulations, decried the economic conditions of the time and, among other things, promised to balance the federal budget, reduce the national debt, fight inflation, and deregulate the economy, all while lowering taxes. Reagan's espoused economic platform, formally called supply-side economics, came to be known as "Reaganomics", and was famously called "voodoo economics" by his then-opponent for the Republican nomination, George H.W. Bush, who later became his Vice President.

By the time Ronald Reagan took office in 1981 the oil crisis of 1979 was subsiding and Paul Volcker, the Federal Reserve chairman appointed by President Carter, was beginning to get inflation under control. Indeed, like the taming of inflation, many of the economic trends associated with the so-called Reagan Revolution were already well under way before Reagan ever stepped into office.

Looking at the chart below, we can see definite relationships between the rate of inflation and the unemployment rate from the late 1960s through the 1990s: increases and decreases in the rate of inflation are reflected in the unemployment rate roughly 1 to 2 years later.


sources: http://www.usgovernmentspending.com/
http://www.bls.gov/
http://www.usinflationcalculator.com/
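
The claimed lag can be made concrete with a simple lagged-correlation check. The sketch below uses invented series purely to illustrate the mechanics; an actual analysis would use the annual inflation and unemployment figures from the sources above:

```python
# Illustrative sketch: test whether inflation leads unemployment by 1-2 years.
# These series are invented for demonstration only -- not actual BLS data.
inflation =    [3.0, 5.5, 8.9, 12.1, 7.0, 4.8, 6.5, 9.0, 13.5, 10.3, 6.1, 3.2]
unemployment = [3.5, 3.4, 4.9,  5.6, 8.5, 7.7, 7.1, 6.1,  5.8,  7.1, 9.7, 9.6]

def lagged_correlation(x, y, lag):
    """Correlate x[t] with y[t + lag], i.e., x leading y by `lag` years."""
    x_lead, y_lag = x[:len(x) - lag] if lag else x, y[lag:]
    n = len(x_lead)
    mx, my = sum(x_lead) / n, sum(y_lag) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x_lead, y_lag))
    var_x = sum((a - mx) ** 2 for a in x_lead)
    var_y = sum((b - my) ** 2 for b in y_lag)
    return cov / (var_x * var_y) ** 0.5

# If the lag hypothesis holds, r should peak at a lag of 1 or 2 years.
for lag in range(4):
    print(f"lag {lag} years: r = {lagged_correlation(inflation, unemployment, lag):+.2f}")
```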

While Ronald Reagan's policies received much of the blame for rising unemployment during the first two years of his presidency, and much of the credit for the later drop, the reality is that inflation was a key driver of unemployment rates during this time, and inflation was brought under control both by the policies of President Carter's Federal Reserve chairman Paul Volcker and, more importantly, by the ending of the 1979 oil crisis, both of which had very little to do with the policies of Ronald Reagan.

Something else we notice in the chart above, however, is that the federal deficit continued to rise under Reagan. There is, of course, a relationship between the federal deficit as a percentage of GDP and unemployment, in large part because increases in unemployment coincide with decreases in GDP; the deficit as a percentage of GDP will naturally rise when GDP goes down, even if the deficit in actual dollars stays the same. Moreover, as unemployment goes up, tax collections typically go down, further adding to the deficit even if spending stays the same.
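
A quick worked example, using invented round numbers rather than actual budget figures, illustrates both effects:

```python
# Invented round numbers for illustration only -- not actual budget data.

# Effect 1: the same dollar deficit is a larger share of a smaller GDP.
deficit = 75_000_000_000          # $75 billion deficit, held constant
gdp_before = 3_000_000_000_000    # $3.0 trillion GDP
gdp_after = 2_700_000_000_000     # GDP falls 10% in a downturn
print(f"before: {deficit / gdp_before:.1%} of GDP")  # 2.5%
print(f"after:  {deficit / gdp_after:.2%} of GDP")   # 2.78%

# Effect 2: tax collections fall with GDP while spending stays the same,
# so the dollar deficit itself grows too.
spending = 650_000_000_000
revenue_before = 575_000_000_000       # deficit = $75 billion
revenue_after = revenue_before * 0.9   # collections fall 10% with GDP
print(f"new deficit: ${(spending - revenue_after) / 1e9:.1f} billion")  # $132.5 billion
```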

Nevertheless, the chart above reflects another widely acknowledged fact of the Reagan years, which is that federal deficits rose under Reagan and remained higher, even as unemployment dropped. We can see that the federal deficit was still significantly higher in 1989 than it was in either 1979 or 1973, even though the unemployment rate was comparable to those years.

Due to the extremely effective messaging of the Reagan administration, the real impacts of Reagan's policies were often at odds with perceptions. To this day the reduction in unemployment and the economic expansion of the 1980s are widely attributed to "supply-side" policy, when in fact the real effect of Reagan's policies was largely Keynesian. The economic expansion of the 1980s, modest as it was, was built almost entirely on borrowing and was heavily fueled by government deficit spending, in line with Keynesian methods.

Reagan reduced taxes through a series of tax reforms starting in 1981 and culminating with the Tax Reform Act of 1986. In actuality Reagan signed a mix of both tax cuts and tax increases from 1981 through 1988, but the net effect of all the legislation was a reduction in overall taxation. The overall effect of Reagan's tax reforms was to modestly increase taxes on the poor and to significantly reduce taxes on the wealthy. In a televised address in 1981 Reagan stated, "tax revenues, in spite of rate reductions, will be increasing faster than spending, which means we can look forward to further reductions in the tax rates," claiming that tax reductions would spur economic growth and thereby increase revenue, which in turn would allow for continuing tax reductions.

The reality was that the tax reductions led to revenue losses. Tax collections did increase under Reagan, but at the slowest pace of any post-WWII president, with the increases largely due to population growth. The increases did not outpace spending, and deficits consistently rose under Reagan. Far from paying down the national debt as Reagan had promised, the national debt rose from $900 billion when Reagan entered office to $2.9 trillion by the time he left.

When assessing the effects of the Reagan tax cuts on the economy, virtually everyone leaves out the impact of the deficits. Economic growth did occur during the Reagan presidency in loose association with tax cuts; however, those tax cuts were accompanied by decreasing interest rates and increasing public and private deficit spending.

As a result of the tax cuts, middle income and wealthy Americans had more money in their pockets than they otherwise would have, but tax revenues did not rise as predicted, and government spending rose faster under Reagan than under prior administrations. As a result, "the pie" did indeed grow, but it grew through borrowing. Had government spending been constrained by revenue, in other words had the tax cuts gone through while deficits stayed the same or fell because spending was kept in line with revenues, there would likely have been no stimulative effect at all. Part of the theory behind supply-side economics was that tax reductions in and of themselves, by allowing businesses and individuals to keep more of their money instead of having the government spend it, would lead to economic growth. This never happened during Reagan's presidency. What really happened is that businesses and individuals (except the poor) were allowed to keep more of their money, while the government not only continued spending the money it was no longer collecting, but actually spent at an even faster rate. What we really had was tax cuts plus increased government spending, not tax cuts plus decreased government spending. The immediate effects of this twin stimulus of lower taxes and increased spending were, of course, generally positive in the short term.

However, for a variety of reasons, the reality of the Reagan deficits was not widely acknowledged. By the time Reagan left office in 1989, people widely believed that he had balanced the budget and reduced the national debt; after all, how could someone who made reducing the national debt a central plank of his presidency and message have actually massively increased the debt? Reagan consistently talked about the importance of reducing the debt, and this message is what stuck in people's minds, even as his policies were increasing the debt at a rate not seen since World War II. This allowed all of the positive results of Reagan's economic policies to be attributed to "tax cuts", creating a widespread public belief that the tax cuts produced the economic recovery of the 1980s, as opposed to the Keynesian deficit spending, the taming of inflation, and all of the borrowing facilitated by the Federal Reserve's steadily falling interest rates. The economic recovery of the 1980s was a recovery fundamentally built on borrowing. Unlike the borrowing of the FDR era, however, the borrowing of the Reagan era went widely unacknowledged and was generally kept out of mind, as if it weren't happening and as if, at any moment, the balance sheet would turn positive and the debt could be repaid without it ever having to be acknowledged.

In addition to tax reforms, the deregulation movement which started in the 1970s continued during the 1980s. Again, much like the taming of inflation, many of the important acts of deregulation occurred under President Carter in 1980, just prior to Reagan entering office, though Reagan's presidency is popularly associated with them, in part because they were largely implemented during his time in office. Among the most important were the Regulatory Flexibility Act and the Depository Institutions Deregulation and Monetary Control Act. The stated goal of the Regulatory Flexibility Act was essentially to reduce regulatory requirements on "small" businesses (note that a "small business" is any business with fewer than 500 employees, regardless of revenue). The act also brought businesses, especially "small" businesses, into the legislative process and gave them a significant voice in developing the regulations they would be subject to.

The Depository Institutions Deregulation and Monetary Control Act gave the Federal Reserve more control over non-member banks while also reducing other banking regulations. The passage of this act, along with additional deregulation of the financial industry under Reagan, in conjunction with the tax reforms of 1981 and 1986, contributed significantly to the Savings and Loan Crisis of the late 1980s. Ironically, the Tax Reform Act of 1986 contributed to the crisis by eliminating real estate investment tax shelters which had been created in large part by the 1981 tax legislation. The 1981 reforms contributed to a real estate bubble in the early 1980s, and the elimination of these shelters then significantly reduced the value of many real estate investments, which became a motivation for manipulation of the now-deregulated savings and loan industry. Because so many real estate investments were losing money, in large part due to the loss of their tax advantages, many savings and loan institutions began engaging in fraud to try to salvage their investment portfolios. This fraud, among other things, is what led to the crisis and the massive taxpayer bailout of the savings and loan industry in the early 1990s under President George H.W. Bush.

Many other economic trends which had their roots in the 1970s continued into the 1980s as well.

The number of women entering the workforce continued to rise during the 1980s, largely driven, as in the 1970s, by economic necessity. This was despite the fact that President Reagan was no friend to women workers, having cut funding for the Equal Employment Opportunity Commission, which investigates matters of sex discrimination in the workplace, and having never raised the minimum wage to keep pace with inflation during his entire time as president (minimum wage jobs were, and are, disproportionately filled by women). Women poured into the workforce during the 1980s not just because they wanted to, but because they had to.

As individual earnings for the majority of the population stagnated during the 1980s the pressure for households to have two income earners increased.


source: http://w3.access.gpo.gov/usbudget/fy2000/sheets/b047.xls

As can be seen above, increases in household income during the Reagan era were driven almost entirely by women entering the workforce. Once again, a significant component of the economic growth that did take place during the Reagan years had little or nothing to do with the actual policies of Ronald Reagan. In fact, Reagan and his conservative allies lamented the increasing number of women in the workplace, and at best did nothing to aid women's entrance into the workforce, while at worst pursuing policies intended to counter the growing trend.

Another trend which began in the 1970s but came into full swing during the 1980s was the growth of the American trade deficit. The trade deficit of the 1970s was largely a product of the price of oil, but by the 1980s the deficit was dominated by manufactured goods. While the United States had been a net exporter of manufactured goods from the beginning of World War II through the end of the 1970s, the 1980s marked a dramatic shift in American manufacturing and trade.


source: http://www.nytimes.com/2009/05/02/business/02charts.html


source: http://www.epi.org/economic_snapshots/entry/webfeatures_snapshots_20060426/

The shift of the United States from a net exporter of manufactured goods to a net importer coincided with three major things: other countries catching up to America developmentally after World War II, the rise of the American dollar against other currencies during the mid 1980s, and, importantly, the rise of the financial and retail sectors of the economy.

The growing deficits of the Reagan administration meant that the United States government had to sell an extraordinary amount of government bonds to finance the debt. To attract buyers, the yields on these bonds had to be attractive, which led to a huge influx of foreign money into the United States by the mid 1980s, both to buy the safe, high-yield government debt and to invest privately, since once inflation had been brought under control the United States was seen as a safe place for investment. This caused the value of the American dollar to rise significantly against world currencies, most notably the British pound. A stronger dollar made American exports very expensive and made it much cheaper for Americans to import foreign goods, bolstering an already emerging trend toward greater imports of foreign-made goods. By the late 1980s the dollar had lost some value against foreign currencies, which, among other things, helped boost US exports once again, but the movement toward foreign production and the growing import of foreign-made goods was by then solidly established.
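
A simple worked example, using invented exchange rates rather than the actual rates of the mid 1980s, illustrates the mechanics:

```python
# Invented numbers for illustration only -- not actual 1980s prices or rates.
car_price_usd = 10_000   # an American car priced in dollars

weak_dollar = 0.50       # 1 dollar buys 0.50 British pounds
strong_dollar = 0.90     # dollar appreciates: 1 dollar buys 0.90 pounds

# To a British buyer, the same American car gets much more expensive...
print(f"weak dollar:   {car_price_usd * weak_dollar:,.0f} pounds")    # 5,000
print(f"strong dollar: {car_price_usd * strong_dollar:,.0f} pounds")  # 9,000

# ...while a 5,000-pound British import gets cheaper for American buyers.
import_price_gbp = 5_000
print(f"weak dollar:   ${import_price_gbp / weak_dollar:,.0f}")    # $10,000
print(f"strong dollar: ${import_price_gbp / strong_dollar:,.0f}")  # $5,556
```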

Large retailers like Wal-Mart became hugely influential in the 1980s, and this influence allowed the retail sector to put significant pressure on manufacturers, putting retailers in the driver's seat instead of the other way around. By the end of the 1980s large national and international corporations had almost entirely taken over the retail sector, having pushed out virtually all of the individually owned and regional retailers. This consolidation of purchasing power allowed large retailers to pressure manufacturers and demand price reductions from them, which in turn pushed manufacturers to focus increasingly on cost cutting. The three primary immediate impacts during the 1980s were increased imports of consumer goods from foreign competitors, increased pressure on American manufacturers to "contain costs" by downsizing and slowing the growth of worker compensation, and the off-shoring of production to foreign countries by American companies.

Off-shoring is when a company builds facilities and/or hires workers in a foreign country to produce goods and/or services that are consumed domestically. Outsourcing is when a company contracts with another company to produce goods or services on its behalf; in manufacturing this typically involves outsourcing the production of components to companies that specialize in making those components. Off-shore outsourcing is, of course, when a company outsources to companies in foreign countries. Limited off-shoring and off-shore outsourcing were taking place in America prior to the 1980s, but they were largely confined to the textile and clothing industries, and the vast majority of imports to the United States in the 1970s were still products produced by foreign companies. By the end of the 1980s, however, a significant share of American imports were products being produced for and sold by American companies.

The off-shoring of production did more than move jobs to foreign countries; it also moved capital and capital development to foreign countries, and it made regulation of the economy, corporations, and capital more difficult. This move to off-shore production coincided with a shift from defined benefit retirement pension plans for corporate employees to defined contribution only retirement plans, largely under the 401(k) tax code. This allowed corporations to take on greater risk and to shift their priorities toward short-term goals. When corporations are bound by commitments to long-term retirement liabilities they are forced to plan for long-term corporate performance; as corporations shifted away from defined benefit pensions to defined contribution only plans, rewards and objectives shifted to the short term.

Likewise, the shift from a retirement system based on private pensions to one based on individual asset holdings brought about an overall shift in the nature of finance and corporate compensation. Under the private pension system the retirement funds of workers were managed in a more widely distributed way, because there were many different pensions and each was managed separately. And since the pensions had an obligation to pay out benefits for the life of the contributors, pension managers tended to be relatively conservative and to always have their eye on the long-term performance of the pension fund, in addition to the fact that pensions are more heavily regulated than the mutual funds, bonds, and stocks in which 401(k) money is invested.

This was not the case with defined contribution only plans, because with a defined contribution only plan there is no long-term responsibility on the part of the plan administrators. Pension managers have a significant responsibility regarding the use and performance of the funds they manage, whereas the managers of the mutual funds and other investment vehicles held in private retirement accounts have essentially no long-term responsibilities. Those "retirement accounts" come with no obligations whatsoever on the part of either the employers who contribute to them or the managers who run them, because ultimately all responsibility "falls on the individual".

The shift from pension plans to individual retirement accounts did result in a significant increase in individual stock ownership during the 1980s, as can be seen in the graph below, and this increase in individual capital ownership did produce a modest increase in actual stock market wealth held by the bottom 90% of households. However, this increase in stock ownership and wealth did not represent an actual broadening of control over capital, or a broadening of the return on capital or of income security, because what actually happened during the 1980s was that individual stock ownership displaced defined benefit pension participation: guaranteed and significant retirement revenue streams were replaced with less stable and less significant individual investments.

This shift meant that while individual ownership went up, it was largely just offsetting a loss in managed benefits promised to individuals, and thus it represented neither an economic gain by the bottom 90% nor a gain in control over capital by the bottom 90%. This was especially true since almost all of the gains in "individual stock ownership" were actually products of indirect individual ownership through investment vehicles such as mutual funds, in which voting rights granted by stocks went to the mutual fund managers, not the actual individual investors. In addition, as in prior decades, gains in stock ownership were also offset by declines in individual business ownership as capital was consolidated by growing corporations, against which small businesses were increasingly unable to compete.


source: http://www.pbs.org/fmc/book.htm


source: http://www.stateofworkingamerica.org/charts/view/210

The shift from pension systems to individual retirement accounts provided more financial freedom to corporations, and at the same time it played a role in the shift away from long-term performance goals to short-term performance goals. This, in conjunction with other economic trends and policies of the 1980s, contributed to the beginning of a trend of significant increases in executive compensation at corporations relative to the wages of corporate workers, a trend which would become even more prominent in the decade to come.

Globalization and the Tech Economy

On the whole, the American economy appeared quite robust during the mid to late 1990s. There was significant economic growth as measured by GDP, and unemployment remained low at around 5%. The 1990s included one of the longest periods of continuous economic expansion in US history. Deregulation of the economy continued under the Clinton administration, culminating in the Financial Services Modernization Act of 1999, perhaps the most significant and far-reaching of the deregulatory actions taken since the Great Depression.

By the early 1990s multiple conditions arose that contributed to an explosion of globalization, led strongly by the United States. With the collapse of the Soviet Union in 1991 the global political environment became more open than it had been at any time since before the Great Depression. Technological revolutions in computers and communications, punctuated by the growth of the internet, made international communication and commerce easier and more efficient than at any prior time in human history. Industrialization and modernization in developing economies, such as China and India, vastly expanded the collective productive capacity of the global population.

Multiple trade agreements were passed during this period, the largest and most well known of which was the North American Free Trade Agreement, or NAFTA, between the United States, Canada, and Mexico. These agreements generally eliminated or significantly reduced tariffs on imports and exports between the countries, in addition to streamlining paperwork and various customs processes for wholesale goods.

America's largely domestic post World War II economy transformed rapidly during the 1990s. Imports went from the equivalent of 5.5% of GDP in 1970 to almost 15% of GDP by 2000. Exports increased as well, but not as rapidly as imports.


source: http://www.cesifo-group.de/portal/page/portal/ifoHome/a-winfo/d3iiv/_DICE_division?_id=6746666&_div=6746716

This meant that, on the whole, Americans were getting richer, at least in the short term. When a country imports more than it exports, it is receiving more than it is giving back, so a net importer is superficially getting the better deal: at the present time the country is "getting" more than it is "giving". This inflates the standard of living beyond the country's means; the population enjoys a higher standard of living than it is actually producing, because some portion of the material assets consumed by the population is effectively on loan from other countries.

On an individual basis, it is as if you made 2 sandwiches and traded them to someone for 3 equivalent sandwiches. Clearly you are getting the better end of the deal, at least for the moment. However, if this goes on day after day, and the person who gives you 3 sandwiches for every 2 you give them is keeping a tally and expects at some point to be paid back for all of the extra sandwiches they have given you, that debt can accrue over time to become problematic.
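To make the arithmetic of the analogy concrete, here is a toy sketch in Python; the daily quantities are invented for illustration and stand in for any goods traded on credit:

```python
# Toy model of the sandwich analogy: a net importer consumes more than it
# produces each day, while the counterparty keeps a tally of what is owed.
def accumulated_debt(days, given_per_day=2, received_per_day=3):
    """Return the total surplus consumed (and therefore owed) after `days`."""
    daily_surplus = received_per_day - given_per_day  # 1 extra sandwich per day
    return daily_surplus * days

print(accumulated_debt(365))  # after a year: 365 "sandwiches" on loan
```

The point of the sketch is simply that a small daily surplus compounds into a large outstanding balance if the imbalance never reverses.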

The imbalance of trade with Asian countries, such as China and Japan, increased significantly during the 1990s. It was sustained in part by Asian countries purchasing significant quantities of US Treasury bonds, a.k.a. American debt, which acted to inflate the value of the US dollar against their own currencies, thus favoring the American purchase of Asian-made goods (a strategy which accelerated during the 2000s) and providing the means by which America could afford to import more than it exported. This policy helped (and continues to help) drive export-based economic growth in developing countries.

Unlike the rising trade deficits of the 1980s, however, the trade deficits of the 1990s were accompanied by a much more fundamental shift in the economy. The trade deficits of the 1980s were dominated by imports of goods from foreign manufacturers, while by the 1990s the trade deficits had become dominated by goods manufactured in foreign countries for or by American corporations. Whereas 20% of American workers were employed in manufacturing in 1992, by 2002 only 14% of American workers were employed in the manufacturing sector.


source: http://www.bls.gov/opub/mlr/2000/12/art1full.pdf


source: http://www.census.gov/epcd/ec97sic/E97SUS.HTM
Notes: The US population grew by 6.2% from 1992-1997
"Service industries Exempt" is government and non-profits

So why would a country like China be willing to give more to America than it gets back? Why not just keep the extra goods domestically and trade less to America? Essentially, developing countries, especially China, were willing to give America more goods than they received back, and to support this on-going deficit through the purchase of American bonds to prop up American currency, in order to facilitate the development of productive capital in their own countries. Most specifically, these policies facilitated a transfer of capital from America to the host producer country. While there was net growth in global productive capacity during the 1990s, there was also a significant transfer of productive capacity, such that some portion of the increased productivity in developing nations, especially China and Mexico, came at a cost to American productivity. Global economic productivity was not an entirely zero-sum game during the 1990s, but neither was it a universal rise in the water level. Why would someone give you 3 sandwiches in exchange for only 2 back? It seems to make no sense, except that what the Chinese and other developing countries were getting, in exchange for giving us more commodities in the present, was the luring of capital away from America, inducing it to relocate to their countries and thereby increasing the capital stock, and thus the long-term productive capacity, of their nations beyond what would otherwise have been possible in a true "free-market", in part through the use of national monetary policy. They essentially allowed their countries to be under-compensated in the short term in order to establish a better long-term position.

What basically happened during the 1990s was like two ranchers, one of whom (rancher A) started out with 100 horses while the other (rancher B) had 10 horses. Over a period of some years both ranchers increased the size of their stock, so that they had 200 horses between them, with rancher A holding 120 horses and rancher B holding 80. Of the 80 horses that rancher B had, 30 had actually come from rancher A, obtained by trading excess feed to rancher A in exchange for the additional horses. So rancher B's economic growth was a product of both the production of new capital on his own ranch and transfers of capital from rancher A. While rancher A's capital stock did increase over time, it increased at a lower rate than it otherwise would have, due to the transfers of capital from A to B in exchange for commodities.
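The rancher arithmetic can be checked directly; a minimal sketch using the numbers from the paragraph above:

```python
# Verifying the two-rancher example: how much growth happened on each ranch
# versus how much capital simply relocated from A to B.
a_start, b_start = 100, 10
a_end, b_end = 120, 80
transferred = 30  # horses moved from ranch A to ranch B in exchange for feed

a_bred = a_end - a_start + transferred  # A bred 50 new horses, then gave 30 away
b_bred = b_end - b_start - transferred  # B bred 40 new horses on his own

print(a_bred, b_bred)    # 50 40
print(a_start + a_bred)  # 150: the herd A would hold without the transfers
```

In other words, rancher A's underlying production was actually larger than rancher B's; the transfers, not slower production, account for much of the narrowing gap.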

In the American scenario, however, we don't have a single rancher making the exchanges; what we have is an economy of thousands of different publicly and privately owned corporations, millions of workers, and federal and state governments all combined together. During the 1990s American corporations (and their executives and shareholders) benefited from the monetary and fiscal policies of governments, which made the transfer of privately held capital from America to foreign countries profitable. Foreign countries like China and Mexico provided both direct incentives for American companies to relocate capital there and indirect incentives through monetary policies, which the US accommodated. This, in turn, was one of the components that fueled the skyrocketing executive compensation and stock prices of the 1990s. Essentially governments, both foreign and domestic, bore much of the cost of capital relocation from America to developing economies, while executives and stockholders reaped much of the reward and American citizens benefited as consumers from reduced commodity prices.



source: http://www.faireconomy.org/research/CEO_Pay_charts.html

Another major component of the 90s economy, and of the associated stock market bubble, was of course the growth of the Information Technology sector, and the most significant element of that growth during the 90s was the rapid adoption of internet technology and the rise of internet related companies.

After decades of government research and development, and the passage of legislation in the 1980s and 1990s to open the internet up to public and commercial use, as well as government subsidy of internet related business through preferential tax treatment, the late 1990s saw rapid expansion of internet use and the rise of many internet related commercial enterprises, from internet service providers to content providers to commercial search engines to gaming companies to business-to-business service providers to companies that sold tangible commodities via websites, and more. Nevertheless, so-called e-commerce accounted for less than 1% of overall US retail sales by the end of the 90s, and still accounts for less than 4% today.

What effectively happened during the 1990s was that an entirely new form of capital emerged: virtual capital, which had been created by the government via its development of the internet. Virtual capital is basically web domains and web sites. Like land, a web domain is a form of property, but unlike land, web domains are virtually infinite and inexhaustible. Historically capital has come in the form of animals and slaves, then land as farming was adopted, then machines and intellectual property, and most recently virtual capital with the emergence of the internet. As with prior instances of the emergence of new forms of capital, the emergence of virtual capital brought about rapid economic change and changes in the economic order, and it coincided with rampant financial speculation.

New forms of capital do not make old forms of capital obsolete, but they do create new "economic environments" in which new forms of development may occur, and the relative importance of older forms of capital in the total economy is changed. As such, economic and social orders built on control or dominance of old forms of capital may be challenged when new forms of capital emerge and become dominated by different organizations, classes, or individuals than those who dominated the older forms of capital. This is what happened when machinery and intellectual property changed the relative value of land as a form of capital via the industrial revolution, thus contributing to the overthrow of the feudal systems built on land ownership. This occurs because when new forms of capital are created there is no established pattern of ownership, and if the dominant owners of the traditional forms of capital do not have the means, the knowledge, or the foresight to control the new form of capital, then that new form of capital can become a vehicle for the entry of new individuals, organizations, or classes into the economy, and thus a vehicle for the rise of challenges to existing systems of economic and political power.

During the 1990s there was a rapid rise in the number of "new millionaires", individuals who came from modest backgrounds and became wealthy, largely through business ventures.


source: http://www.the-crises.com/wealth-inequality-in-the-us-2/

A significant portion of these new millionaires became wealthy directly via internet or computer related businesses which they either started or were involved in. Much of this wealth was a product of stock ownership in corporations founded by such individuals. As corporations went public during the mid to late 1990s, i.e. issued publicly tradable stock, the founders and principal stockholders of new technology companies received massive wealth through their stock ownership. For example, when e-Bay went public in 1998 the company had around 30 employees and revenues of only $4.7 million a year, yet founders Pierre Omidyar and Jeffrey Skoll became billionaires overnight as e-Bay stock hit $53.50 per share during the first day of the IPO. These individuals became billionaires even though they had not actually produced billions of dollars of value.

Even as the number of new millionaires increased during the 1990s, however, overall economic inequality increased as well. The percentage of the population who were "super-rich", and the magnitude of wealth owned by the super-rich, grew rapidly during the late 1990s; even as the number of rich people increased, the gap between rich and average Americans widened.


source: http://www.the-crises.com/income-inequality-in-the-us-1/

In terms of overall economic impact, "internet companies" typically have a low ratio of workers to revenue, which generally translates into higher incomes per employee but relatively low "job creation" per company. Due to the nature of the internet, and software in general, large revenues can be supported by a relatively small number of workers; in some cases entire companies or products were developed and launched by single individuals or teams of 5 or fewer people. Thus internet businesses made a small number of people very rich relatively quickly, but had very narrow impacts on total US income. Incomes from such businesses were highly concentrated and did not account for the overall increases in job creation during the 1990s.

Looking at revenue per employee for 2010 for a select sample of companies, one can see that information technology companies generally have a lower ratio of employees to revenue. Information technology companies are much more mature today than they were in the 1990s, when their revenues and employee counts were significantly lower. So, while there was a significant increase in employment in the information technology sector during the 1990s, and while a significant portion of the "new millionaires" came from that domain, "internet startups" accounted for a relatively small portion of direct job creation during the 1990s. Within the IT sector there was a greater increase in job creation within the IT departments of established entities, like banks, hospitals, government, telecom companies, and department stores.

Company Revenue Employees Revenue per employee
Exxon $383,221,000,000 83,600 $4,583,983
Google $29,321,000,000 26,316 $1,114,189
Amazon $34,204,000,000 33,700 $1,014,955
Warner Music $3,491,000,000 4,000 $872,750
Ford $128,954,000,000 164,000 $786,304
Microsoft $62,484,000,000 89,000 $702,067
General Electric $150,211,000,000 287,000 $523,383
e-Bay $9,156,000,000 17,700 $517,288
AT&T $124,280,000,000 294,600 $421,860
Kraft Foods $40,386,000,000 97,000 $416,350
Kroger $82,189,000,000 334,000 $246,074
Home Depot $67,997,000,000 321,000 $211,828
Wal-Mart $421,849,000,000 2,100,000 $200,880
Sears $44,043,000,000 355,000 $124,064
J. C. Penney $17,759,000,000 156,000 $113,839
McDonald's $24,075,000,000 400,000 $60,187
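The right-hand column can be recomputed directly from the revenue and headcount figures; a quick sketch using a few rows from the table above (2010 figures as listed):

```python
# Recomputing revenue per employee for a few companies from the table.
companies = {
    "Exxon":      (383_221_000_000,    83_600),
    "Google":     ( 29_321_000_000,    26_316),
    "Wal-Mart":   (421_849_000_000, 2_100_000),
    "McDonald's": ( 24_075_000_000,   400_000),
}

for name, (revenue, employees) in companies.items():
    print(f"{name:>10}: ${revenue / employees:,.0f} per employee")
```

Running this reproduces the table's figures, including the spread of roughly 75 to 1 between Exxon and McDonald's.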

So, if the "internet boom" itself didn't directly account for the low unemployment rates of the 1990s then what did? The specific industries that accounted for the most job gains are provided below, essentially all of which fall into the services sector, but the gains went beyond these industries into construction, as well as some forms of manufacturing.


source: http://www.bls.gov/opub/mlr/2000/12/art1full.pdf

The real question is: why and how did the economic expansion of the mid to late 1990s occur, thus supporting this employment growth? I believe that the primary answer is inflation and interest rates, namely low inflation and low interest rates. Why did America have low inflation in the 1990s? Because of efficiency advances, the off-shoring of manufacturing, low energy prices, and the driving down of commodity prices by large retailers engaged in fierce competition.

Here's what really happened: there was a lag effect in the relationship between wages and inflation. Going into the 1990s wages continued to increase at the same rate they had for the past decade, which is to say not quite keeping up with inflation. By the mid 90s, however, the rate of inflation decreased as the effects of off-shoring and retail competition took hold. Wage increases stayed on their same trajectory, and thus by around 1996, since inflation had dipped but wage growth hadn't, average workers were relatively "richer" than they had been in decades, even if only a little bit. This led to increased demand, which created a cycle of demand leading to increased job creation and thus actual wage growth.
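A stylized sketch of that lag effect follows; the wage and inflation paths below are invented to illustrate the mechanism, not actual 1990s data:

```python
# If nominal wage growth stays on a fixed trajectory while inflation dips,
# real wages fall at first and then begin to rise mid-decade.
nominal_growth = 0.031                                  # assumed, constant
inflation = [0.040, 0.038, 0.033, 0.028, 0.025, 0.024]  # assumed, dipping

real_wage = 100.0
for year, pi in enumerate(inflation, start=1991):
    real_wage *= (1 + nominal_growth) / (1 + pi)
    print(year, round(real_wage, 2))
# The index falls while inflation outpaces wage growth, bottoms out, and
# then climbs once inflation drops below the unchanged wage trajectory.
```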

But this wasn't all that was fueling the growth, because credit, fueled by the low interest rates, was also playing an important role, both for average Americans as consumers and home buyers and for businesses, especially in the financial industry. So we had a dual effect: inflation briefly under-pacing wage growth, coupled with a massive influx of credit. That influx of credit was a product of increased savings in countries like China and India along with the Federal Reserve's policy of low interest rates, which, along with financial deregulation, contributed to growth in the financial industry that further facilitated the expansion of credit. The low interest rates, low inflation, and expansion of credit led to a construction boom and increased consumer spending.


source: http://en.wikipedia.org/wiki/Financial_position_of_the_United_States#Debt

And this brings us to the investment bubble. Along with the economic factors mentioned above, the continuing shift from pension plans to 401(k) style retirement plans resulted in a growing number of people investing in the stock markets, in addition to the individual investors and "day traders" who flooded into the stock markets in the late 90s as on-line trading platforms gained popularity, making it easier than ever before for individuals to directly buy and sell stocks. The financial services industry saw the greatest income growth of any industry in the late 1990s as money flooded into the financial markets. The "internet boom" simply provided enough of a plausible explanation for ballooning stock valuations to allow many people to believe that the stock market gains of the late 90s were "real".

Once again, a widespread belief took hold that we had entered into a new era of economics, and once again there was a widespread belief that government regulation had become unnecessary or even detrimental. The economic team of the Clinton administration was composed heavily of strong "free-market" advocates, such as Robert Rubin, Alan Greenspan, and Larry Summers, who were fundamentally opposed to the concept of financial and economic regulation. Indeed, through the late 1990s the apparent wisdom of this view gained support with each new high in the stock markets.

However, just as in the 1920s in the midst of the industrial revolution, an economy heavily fueled by consumer credit, highly leveraged institutional investments, and a widespread public belief that investments were a no-lose proposition, led to the rise of investment bubbles. Just as in the 1920s, real economic growth and technological revolution seemed to provide a plausible explanation for rising stock prices, yet underlying the economic growth was growing economic inequality and unsustainable levels of consumer and financial debt.

Even though average incomes were growing marginally faster than inflation, there was also a growing separation between top and bottom incomes, and despite gains in real income, savings continued to decline as consumption was increasingly fueled by credit, the availability of which expanded rapidly during the 1990s, in large part due to unregulated "financial innovation". While consumption was driven largely by household and business debt, market prices were simultaneously driven by increasingly leveraged speculation by institutional investors, namely the emerging and almost entirely unregulated hedge funds and the growing investment banks, through the use of unregulated derivatives.

Even though the percentage of Americans who owned stocks increased dramatically during the 1990s, due to the growth of 401(k) style retirement plans and individual stock trading, as millions of Americans entered into stock market speculation for the first time, the portion of the nation's capital owned by the bottom 90% of the population actually continued to decline. Despite the fact that the percentage of Americans owning stock had increased in just ten years from roughly 30% as of 1990 to over 50% by the end of the century, the portion of stock market wealth owned by the richest 10% of the population remained essentially unchanged over that period.

While much was made about startup companies in the press during the 1990s, in fact the rate of entrepreneurship declined during the 1990s as wages rose and people sought employment with established organizations during the "good" economy. In addition, the rapid growth of multinational corporations during the 1990s and the continued expansion of large corporations in the services sector resulted in the continued decline of direct capital ownership and control as small businesses were pushed out of the market by larger corporations.


source: http://www.american.com/archive/2010/april/entrepreneurship-whats-in-a-name

In fact what we find is that since the end of World War II self-employment and entrepreneurship in America have had an inverse relationship to employment rates and median incomes: when employment and/or wages go up self-employment goes down, and when employment and wages go down, self-employment goes up. What this means is that entrepreneurship in America is often not a product of opportunity, but rather of desperation. This desperation-driven entrepreneurship, just as was the case during the Great Depression, typically involves little or no capital accumulation by the self-employed, creates few new jobs, and relies heavily on the manual labor of the self-employed individual. As a result, increases in desperation-driven "entrepreneurship" do little or nothing to actually increase the rate of capital ownership in the population. Thus, in good times or bad, whether self-employment rates were increasing or decreasing, the rate of capital ownership and control continued to decline through the 20th century, with the "economic expansion" of the 1990s being no exception.

The Bursting Bubble

On March 10, 2000 the NASDAQ Composite Index peaked at over 5,000, after which it declined precipitously to around 1,200 in October of 2002. The NASDAQ stock exchange was (and remains) heavily populated by technology companies, with relatively few older and established companies listed on it compared to the New York Stock Exchange, which saw a relatively smaller decline over that same period.

The day that the NASDAQ began its fall is typically considered the day that the "dot-com bubble" burst, and technically that is accurate, but the bubble actually began to rupture about a year and a half prior to that, in late 1998, when the hedge fund Long Term Capital Management imploded virtually overnight, requiring a bailout engineered by the Federal Reserve. Not long afterward the Federal Reserve began raising interest rates, which in turn accelerated the inevitable bursting of the bubble.

Long Term Capital Management (LTCM) was a highly leveraged hedge fund run by top economists, including former Federal Reserve vice-chairman David Mullins, that operated in almost total secrecy with no regulatory oversight. Even LTCM investors had virtually no information on how the fund operated; they just knew two things: the historical rate of return and (they were assured) that everything was legal. LTCM used derivatives and bond market arbitrage to generate high and, so its operators thought, risk-free profits through extremely high-volume, low-margin trades. Basically, they profited from small price differences in the bond markets, but doing so required very large transactions, since the price differences were small. In order to engage in these large transactions LTCM used highly leveraged derivatives, resulting in trillions of dollars of exposure based on only a few billion dollars of actual assets. At its height LTCM was generating a 40% return on investment. Virtually every major bank and Wall Street financial firm had investments with LTCM. In mid-1998 a series of "unforeseen economic events" resulted in massive losses in LTCM's portfolio, requiring the liquidation of additional assets to cover them.
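To see why that structure was so fragile, consider a minimal leverage sketch; the equity figure is roughly the capital reported for the fund, while the leverage ratio and price moves are assumptions for illustration:

```python
# With positions many times larger than capital, even small adverse price
# moves across the portfolio consume a huge share of the fund's equity.
equity = 4.7e9     # approximate fund capital, dollars
leverage = 25      # assumed ratio of positions to equity
positions = equity * leverage

for move in (0.01, 0.02, 0.04):
    loss = positions * move
    print(f"{move:.0%} adverse move -> ${loss / 1e9:.1f}B loss "
          f"({loss / equity:.0%} of equity)")
# A 4% move against the portfolio wipes out the entire capital base,
# forcing asset sales into an already falling market.
```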


The value of $1,000 invested in LTCM, the Dow Jones Industrial Average and invested monthly in U.S. Treasuries at constant maturity.
source: http://en.wikipedia.org/wiki/Long-Term_Capital_Management#Downturn

The result was an almost immediate collapse of the fund. The Federal Reserve organized a private bailout of the hedge fund by its own investors, who collectively put up $3.5 billion to buy the fund, in an effort to prevent the major panic and financial meltdown that might have followed had the fund become completely insolvent. The bailout worked in its fundamental intention, which was to prevent a major financial panic, but it was also a sign of things to come, namely greatly underestimated risks and highly leveraged investments that were primed for collapse. However, the success of the bailout of LTCM, which went largely unnoticed by the public, allowed the market charade to go on a little longer.

By the end of the 1990s an estimated 7.5 million Americans had on-line brokerage accounts, with as many as 5 million of those engaged in day trading. With stock markets setting new record levels each day the lure of easy money, and the sense that if you weren't getting it you were getting left behind, became widespread throughout the country. By late 1999 some individual high-flying internet stocks were already starting to fall as internet startups began to fold, but the overall market continued to grow until early 2000.

The ensuing decline of so-called dot-com stocks was inevitable, though right up to the decline major "market analysts" and financial news outlets were touting the soundness of stock market investment. As with LTCM, many investors were operating in an information vacuum, but worse, large-scale fraud and misinformation were also rampant in the markets at multiple levels. In the lax regulatory environment of the late 1990s corporations were misleading regulators (who largely just took their word on many issues), corporations were misleading investors and employees, and brokers and fund managers were misleading individual investors. This was all compounded by the fact that so-called trusted authorities were giving the public a constant stream of reassurances about the markets and encouragement to invest.

By 2001 it had become apparent that dozens of major corporations had engaged in massive accounting and reporting fraud, having inflated their earnings for years and engaged in all manner of market manipulations and deceptions. The two most famous cases are Enron and WorldCom. Enron was the first such corporation to have its financial manipulations revealed, which began an outpouring of revelations about other firms that had engaged in similar practices. As these revelations came to light the stocks of these companies collapsed, and many of them went completely bankrupt. The effects of these revelations, along with the September 11th, 2001 terrorist attacks, resulted in broader stock market losses by the end of 2002, bringing the Dow Jones Industrial Average under 8,000 from its peak of almost 12,000 in early 2000. All in all, over 1,500 firms ended up restating their earnings, indicating that these firms had engaged in manipulative accounting during the 1990s.

The rise and fall of Enron was heavily tied to deregulation and fraud on multiple levels. During the 1990s deregulation of the energy sector took place, enabling the sale of energy commodities, such as electricity and natural gas, at market prices. Enron manipulated energy markets, creating unnecessary brownouts and blackouts, overscheduling transmission lines, and so on, all of which inflated energy prices and thus inflated the company's real profits; simultaneously, Enron over-reported those profits through accounting gimmicks and flat-out false profit statements. So not only were its profits inflated by illegal and manipulative activity in the first place, it over-reported those already inflated profits as well.

Enron was no small company; prior to its collapse it had become one of the biggest and most influential energy companies in America. The company also won numerous awards and its stock was heavily recommended by numerous market analysts. Fortune had named Enron America's most innovative company six years in a row, and the company and its executives were frequently praised by market analysts and business organizations as corporate leaders in the new economy.

Ironically, the over-stated profits of Enron and thousands of other US firms during the 1990s resulted in those companies paying higher corporate taxes than they otherwise would have, had they stated their real (much lower) profits. The primary reason that these companies over-stated profits, despite the fact that doing so was actually worse for the company, is that the executives were heavily invested in their companies' stocks and compensated with stock options. Over-stating profits caused stock prices to go up, netting millions in over-sized bonuses and stock profits for the executives. Investors were not inclined to deeply question the profits because the rising stock prices benefited them as well.

But it wasn't just the corporations that were engaged in fraud and market manipulation; brokers were as well. Merrill Lynch was the most high-profile broker to get caught. The problem was particularly prominent among recently deregulated investment banks that also operated as brokerages. The financial deregulation of the 1990s made it easier for large financial firms to engage in multiple lines of business, like investment banking and brokerage, and simultaneously broke down barriers between these lines of business within companies. Corporations were the clients of the investment banking side of the company and investors were the clients of the brokerage side, so there was an incentive for the brokerage business to bring in money for the corporate clients on the investment banking side. The role of the investment bankers was to raise capital for their corporate clients; the role of the brokerage was supposed to be to advocate for the financial interests of the investors. What became widespread practice by the end of the 1990s was that many of America's largest and most prominent brokerages were knowingly and intentionally rating stocks as good investments that they knew were in fact bad investments, and intentionally advising investors to buy stocks that their own researchers believed were horrible investments. Brokerages reserved their real analysis and advice for their largest clients and gave bad advice to media outlets and small individual investors. As a result they directly benefited their large investors, as well as their corporate investment banking clients, at the direct expense of small individual investors.

Merrill Lynch eventually paid over $100 million in a settlement to defrauded investors, but this figure pales in comparison to the real market effects of this type of activity, not to mention the hundreds of millions, if not billions, of dollars that went to the investors and corporate clients who benefited from the intentionally misleading investment advice of such brokers. The effects of the 2000-2002 bust can be seen in the impact that it had on the share of stock market wealth owned by the wealthiest 1% of the population, whose percentage of total stock market ownership increased from 1999 to 2005 as a result of the losses that fell disproportionately on smaller investors.

Doubling Down

Following the stock market collapse of 2000 and 2001 and the revelations of corporate accounting fraud, a few new regulations were put into place under the Bush administration, namely the Sarbanes-Oxley Act of 2002 and the Pattern Day Trader rule (NASD Rule 2520), approved by the Securities and Exchange Commission in 2001.

The Sarbanes-Oxley Act, also known as SOX, was one of the most significant pieces of new business regulation introduced in decades. The act basically increases the transparency and reliability of corporate accounting and reporting by requiring specific standardized record keeping, accounting, and auditing practices. It affects only publicly traded companies; it does not apply to privately held companies at all. While the act does increase regulation of "business", the beneficiaries of the act are investors, i.e. capital holders. In essence, it is an act that increases government responsibility for policing the practices of capital controllers for the benefit of capital owners. So in a basic sense, what the SOX act does is put the burden of ensuring that corporate executives are doing what they are supposed to be doing, and not lying to their shareholders, on the back of the government, thereby transferring the cost of capital oversight from capital owners to the government. This reduces costs and risks for capitalists, which is why the SOX act was heavily supported by "Wall Street" and investment institutions. In essence, enforcement of the SOX act is another means of subsidizing "Wall Street", or the "investor class".

The Pattern Day Trader rule affects only small traders, not large institutional traders, and as such it has basically been seen as a "scapegoat regulation", one that can be pointed to as having done something while not actually having a meaningful impact. The rule restricts the number of trades that an individual can make on margin using an account with less than $25,000 in it. As such, this rule impacts a small number of traders and a small volume of trading, since the significant volume is executed by institutional traders with accounts of millions or billions of dollars.

The full picture in regard to regulation during the Bush administration is quite complex. No significant formal deregulation took place under George W. Bush, and in fact regulatory spending increased dramatically under Bush. Indeed George W. Bush presided over the largest increase in regulatory spending as well as the largest total amount of regulatory spending of any American president in history.


source: http://reason.com/archives/2008/12/10/bushs-regulatory-kiss-off

But total spending and the total number of new regulations do not tell the whole story. Much of the regulatory increase fell under the Department of Homeland Security and dealt largely with national security issues. The Transportation Security Administration (TSA) saw the largest increase in regulation, and aside from SOX, most of the other financial regulations put into place dealt with financial security and ways to track money from foreign sources, as part of efforts to track down and freeze the assets of suspected terrorists. So while there were significant increases in total regulation under George W. Bush, the primary focus of these regulations was national security and protecting the interests of capital owners, unlike prior generations of regulations, which were primarily focused on labor protection, consumer protection, banking stability, and environmental safety.

The regulatory record of the Bush administration in other traditional areas is even more complex, however. In many cases, such as environmental and workplace safety regulation, regulatory departments were defunded and regulatory enforcement was lax. In other areas, however, funding and regulations were increased and enforcement became extremely stringent. A key example was the Bush administration's approach to union regulation. Under the Bush administration the budget of the Office of Labor-Management Standards was significantly increased and regulatory enforcement became much stricter, the net effect of which was to make the establishment and administration of unions more difficult and costly and to increase sanctions against unions for regulatory failures.

Even after the corporate accounting scandals of 2001 and 2002, which led to some new regulation, overall regulation of corporate activity continued to decline, arguably resulting in a string of high-profile and costly failures, including:

  1. Lack of oversight of the airline industry and corruption at the Federal Aviation Administration, which led to, among other things, the grounding of hundreds of planes due to safety problems
  2. Lack of food source tracking systems, which led to an inability to effectively handle multiple salmonella outbreaks
  3. Lack of food safety inspections, which led to problems with tainted peanut butter, beef, and eggs
  4. Lax mine safety regulation and enforcement, contributing to fatal mining accidents
  5. Inherent conflicts of interest at the Food and Drug Administration, coupled with lax oversight, which led to numerous drug safety problems and dozens of recalls of FDA-approved drugs after evidence of harmful side effects in consumers (many of which were known or could have been discovered prior to FDA approval)
  6. Corruption and lack of regulation within the Minerals Management Service, which paved the way for the BP oil spill
  7. Continued lack of financial oversight by the Securities and Exchange Commission, which enabled the Bernie Madoff scandal
  8. And most important from an economic perspective, the continued lack of oversight of the derivatives and credit markets, which contributed heavily to the housing bubble and crash that continues to have a crippling effect on the economy.

The details of the contributing factors to the Bush-era housing bubble are complex, but the fact is that shortly after the bursting of the so-called dot-com bubble housing prices began to rise sharply, peaking in 2006 and then declining just as quickly as they had risen. As with prior bubbles, "leading authorities" endorsed the rising bubble as a positive economic sign and encouraged the public to buy into the phenomenon, at least initially. By late 2004 some talk of a housing bubble did emerge, though it was still more common for analysts to endorse the housing market than to advise caution. By 2005 there were growing calls for caution, though they were matched by calls for confidence in the housing market as well. It was widely reported in 2005 when Alan Greenspan stated that, "without calling the overall national issue a bubble, it's pretty clear that it's an unsustainable underlying pattern."

Nevertheless, prices continued to escalate as people bought homes for investment purposes and bought homes under the belief that if they didn't buy now they would forever be priced out of the market. A frequent refrain heard from The National Association of Realtors and others was that, "we've never seen a national housing bubble in American history, therefore we don't believe that national housing bubbles can exist."

What had become clear to close observers by 2005, however, was that housing prices were rising far faster than incomes (which weren't rising at all for 90% of the population) and were far exceeding the cost of rent for equivalent housing.

In a Q&A posted on The National Association of Realtors website in 2005, which reflects views frequently expressed by its representatives on multiple news outlets from 2003 through 2006, the association stated the following:

Should we be concerned that home prices are rising faster than family income?

No. There are three components to housing affordability: home prices, income, and financing costs – the latter are historically low. During the last four-and-a-half years of record home sales, there has been a shortage of homes available for sale. As a result, home prices during this period have risen faster than family income. However, in much of the 1980s and 1990s, the reverse was true – incomes rose faster than home prices. On a national basis, according to the Housing Affordability Index published by the National Association of Realtors, a median income family who purchases a median-priced existing home is spending a little over 20 percent of gross income for the mortgage principal and interest payment. In the early 1990s, a typical mortgage payment was in the low 20s as a percent of income, and in the early 1980s it was as high as 36 percent. Overall housing affordability remains favorable in historic terms.

What are the prospects of a housing bubble?

There is virtually no risk of a national housing price bubble, based on the fundamental demand for housing and predictable economic factors. It is possible for local bubbles to surface under the right circumstances, but that also is unlikely in the current environment.
-National Association of Realtors Q & A page; 2005

Despite claims from The National Association of Realtors and others to the contrary, housing prices were clearly not being driven by fundamentals at all. A basic understanding of what happened is that following the bursting of the internet bubble in 2000, investors poured money into real estate believing that it was a "safe investment". This took place in three primary forms: first, individuals and organizations buying investment property directly; second, American individual and institutional investors moving money into real estate investments such as Real Estate Investment Trusts (REITs), buying stock in real estate and construction companies, and investing in mortgage-backed securities; and third, foreign banks and investment institutions from Asia and Europe pouring money into the American credit markets and investing in mortgage-backed securities as well.

The massive inflows of investment funds facilitated a massive expansion of credit, but most specifically a very particular kind of credit: home mortgage credit, though this spilled into commercial real estate as well. The inflow of investment funds helped provide the funds for the credit, and the lowering of the federal funds rate by the Federal Reserve in reaction to the economic recession of 2001-2002 helped to make credit even cheaper, but even that wasn't fully responsible for the bubble. What was really required for the bubble were changes in lending practices, a highly flawed securities rating system, and an unregulated and highly manipulated market for the sale of Collateralized Debt Obligations (CDOs), which bundled mortgage-backed securities and derivatives of mortgage-backed securities into investment instruments.

Ultimately, a CDO is a bundle of individual mortgages, or in some cases a bundle of bundles of fractions of mortgages. Because a mortgage is a time-sensitive financial instrument (the risk that the first portion of the loan will be paid back is different from the risk that later portions will be paid back), the loans were stratified and separated into different risk pools, and those risk pools were rated by rating agencies (namely Standard & Poor's, Moody's, and Fitch), bundled together, pulled apart, re-bundled, and sold as investment instruments. The performance of these instruments was predicted using analysis based on the ratings, but the entire thing was a house of cards, because by this point the home loan process had been broken up among multiple entities, such as real estate agents, appraisers, loan originators, lenders, and underwriters, each of which handled only a small portion of the total transaction and had a very narrow view of the process, and each of which had strong short-term incentives to manipulate the system and ignore long-term interests, since responsibility for the loans was handed off at each step along the way.
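The basic mechanics of stratifying a pool into risk tranches can be shown with a minimal sketch; the tranche sizes and default rates here are invented, and real CDOs were vastly more complicated:

```python
# A toy CDO waterfall: whatever cash the mortgage pool actually collects is
# paid to the senior tranche first; the junior tranche absorbs losses first.
def waterfall(collected, senior_due, junior_due):
    senior_paid = min(collected, senior_due)
    junior_paid = min(collected - senior_paid, junior_due)
    return senior_paid, junior_paid

pool_due = 100.0  # total payments owed by the underlying borrowers
for default_rate in (0.00, 0.10, 0.30):
    collected = pool_due * (1 - default_rate)
    s, j = waterfall(collected, senior_due=70.0, junior_due=30.0)
    print(f"defaults {default_rate:.0%}: senior {s:.0f}/70, junior {j:.0f}/30")
```

Note how with 10% defaults the senior tranche is still paid in full; this is how pools of risky loans could be marketed as containing "safe" senior slices, and why everything hinged on the honesty of the default assumptions behind the ratings.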

Loan originators were incentivized to approve as many loans as possible, because that's where they made their fees, and once a loan was handed off to the lenders it was no longer their responsibility, so how the loan ultimately performed made no difference to them. Competition in the loan origination market rewarded the originators that provided the greatest volume of loans to lenders and underwriters; any originators who sought quality over quantity were selected against in the market, and most originators quickly learned this and adapted their practices to the marketplace. Likewise, appraisers quickly learned to give the appraisals that the originators wanted. The lenders and underwriters were driving the thirst for quantity over quality because the securitization process and the creation of CDOs meant that they were able to package up the loans and "resell" them without actually holding any risk themselves; the risk was largely off-loaded to the investors. The major fly in the ointment was that everyone throughout the process was incentivized to fudge the facts, and there was no regulatory oversight to rein any of it in.

The appraisers over-valued the houses. The originators advised clients to overstate their incomes, or at least failed to perform due diligence. The ratings agencies took all of the data coming in to them at face value, even when it clearly made no sense. For example, the ratings agencies could easily have determined that income levels as stated on the loans in an area far exceeded the incomes reported for that area by the Labor Department. Indeed, it became clear that the ratings agencies weren't simply victims of bad information; they engaged in further manipulation themselves and intentionally looked the other way, because they are not actually unbiased bodies. Rather, they are for-profit corporations that are paid for their ratings by the consumers of the information, and their ratings served a purpose for those consumers, who were the financial institutions creating the CDOs. Those financial institutions wanted "good grades" on the loans so that the loans looked better to prospective investors. The banks paid for the ratings, not the investors, and so the banks got the ratings that benefited them. Again, in this case market competition actually helped to drive the bad practices. Since there are multiple rating agencies, banks would "shop around" for the "best ratings"; if one agency gave higher ratings than another, that agency would be favored and attract more business, which drove the ratings agencies to inflate their ratings to retain or attract business.

Ultimately this whole process spiraled out of control, and the result was a credit bubble for home loans, which caused acute inflation of property prices. It was as if the Federal Reserve had started printing a special kind of money that could only be used to buy houses, and instead of keeping this money in line with the regular dollar, printed two "home buying dollars" for every regular dollar. The credit bubble had grossly over-expanded the "money supply" specifically for buying homes.

By 2006 the fact that the housing market was a bubble had become apparent, but how to get out of it had not. At this point the various players in the home mortgage industry essentially began playing hot potato. While most players in the industry knew that there was a bubble and that it would crash, there was still significant market momentum and still money on the table, so everyone tried to do as many deals as they could and to quickly offload the problems onto the next entity in the chain, knowing that no one wanted to get stuck with the bad deals.

This led to even more deceptive practices, especially among the sellers of mortgage-backed securities, who were trying to offload them onto investors as rapidly as possible while maintaining the facade that the market was still fine. Hedge funds also began betting against the housing market by shorting CDO investments and the banks that had invested in them. By 2007 the housing market began to decline, and the pace of decline accelerated in 2008. Home foreclosures increased, and it quickly became apparent that the mortgage-backed securities and CDOs were vastly overvalued and that many home buyers would not be able to pay the full value of their mortgages. This of course caused the value of mortgage-backed securities to drop, and drop dramatically. It became clear that the rating agencies had been giving extremely inflated ratings to the securities underlying the mortgage investment instruments, and that many of these investments had themselves been purchased on credit and through the use of leveraged derivatives, meaning that credit was used to buy credit instruments, essentially a compounding of the leverage that was largely responsible for the credit bubble.

The overwhelming majority of the investors in mortgage-backed securities were large institutional investors, such as "traditional" banks (which were able to invest in these instruments due to deregulation) and investment banks, as well as hedge funds and other financial institutions. The collapse of the mortgage-backed security market in 2008 led to a global financial panic as the significant mortgage-backed assets held by banks and financial institutions around the world rapidly lost value. One of the first financial institutions to succumb to the collapse in the mortgage market was Lehman Brothers, the fourth largest investment bank in America at the time.

Ultimately the federal government stepped in under the direction of the Treasury Department and the Federal Reserve to "bail out" the financial sector through the Troubled Asset Relief Program (TARP), signed into law by George W. Bush in 2008. The TARP program essentially both loaned money to financial institutions at extremely low interest rates and "bought" "troubled assets" from financial institutions at well above market prices.

The important thing to note here is that the primary response by the federal government to the housing bubble and its collapse was to prop-up the financial institutions that caused the housing bubble, and little or no relief was provided to home buyers who purchased homes during the bubble. This was true during the presidency of George W. Bush and remained true under the following presidency of Barack Obama. The Home Affordable Modification Program was included as part of the 2008 economic recovery package signed by George W. Bush and was continued and moderately strengthened under president Obama, however this program, the most significant federal program for assisting home buyers who bought during the bubble, offers barely any financial relief to home buyers.

In order to be eligible for the program mortgage holders must meet the following conditions:

  1. Borrower is delinquent on their mortgage or faces imminent risk of default (note: this provision led many mortgage servicers to advise home buyers to stop paying their mortgage in order to qualify for the program, and then to foreclose on their homes instead of helping them use the program to refinance)
  2. Property is occupied as borrower's primary residence
  3. Mortgage was originated on or before Jan. 1, 2009, and the unpaid principal balance is no greater than $729,750 for one-unit properties.

If the borrower is eligible then the monthly mortgage payment is adjusted to a maximum of 31% of the borrower's monthly income through the following means (a rough code sketch of this waterfall follows the list):

  1. Reduce the interest rate to as low as 2%
  2. Next, if necessary, extend the loan term to 40 years
  3. Finally, if necessary, forbear (defer) a portion of the principal until the loan is paid off and waive interest on the deferred amount.
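To make the waterfall concrete, here is a minimal sketch in Python of how such a modification could be computed. The borrower's figures, the 0.125-point rate steps, and the $1,000 forbearance increments are illustrative assumptions for the sketch, not the program's exact rules:

```python
def monthly_payment(principal, annual_rate, months):
    """Standard fully amortized monthly mortgage payment."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

def modify_loan(principal, annual_rate, months, monthly_income, target=0.31):
    """Simplified HAMP-style waterfall: rate cut, then term extension, then forbearance."""
    goal = monthly_income * target
    # Step 1: cut the interest rate in 0.125-point steps, to a floor of 2%.
    while annual_rate > 0.02 and monthly_payment(principal, annual_rate, months) > goal:
        annual_rate = max(0.02, annual_rate - 0.00125)
    # Step 2: if still above target, extend the term up to 40 years (480 months).
    while months < 480 and monthly_payment(principal, annual_rate, months) > goal:
        months += 12
    # Step 3: if still above target, forbear (defer, interest-free) principal.
    deferred = 0.0
    while deferred < principal and monthly_payment(principal - deferred, annual_rate, months) > goal:
        deferred += 1_000
    payment = monthly_payment(principal - deferred, annual_rate, months)
    return payment <= goal, payment, annual_rate, months, deferred

# Hypothetical borrower: $250,000 balance at 6.5% with 28 years left, $4,000/month income.
eligible, payment, rate, months, deferred = modify_loan(250_000, 0.065, 336, 4_000)
print(f"eligible: {eligible}, payment: ${payment:.2f}, rate: {rate:.2%}, "
      f"term: {months} months, deferred principal: ${deferred:,.0f}")
```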

So the program only helps people who are facing foreclosure, and only does so by reducing the interest rate and extending the length of the loan. If it is not possible to get the monthly payment below 31% of income even using the steps above, then the borrower is likewise ineligible for the program. What no government program has done, however, is force lenders to write down principal balances in line with post-bubble market values. This means that buyers who bought at the peak of the market face two basic choices: either A) continue grossly over-paying for an asset that is no longer worth what was paid for it, and may not be again for decades, or B) default on the mortgage.

What makes this all the more troubling is the fact that lenders and financial institutions effectively did manipulate the housing market, and they were directly bailed out and their losses covered by the federal government, while home buyers received virtually nothing in the way of assistance. It is important to note that market manipulation is a crime in the United States, but the laws apply only to securities such as stocks and bonds, not to the housing market, so even if there were any prosecutions for market manipulation, at best the investors in mortgage-backed securities would be able to make claims, not the actual home buyers.

As home foreclosure rates increased, unemployment followed, but the rise in unemployment was driven by much more than foreclosure rates.

The reality is that the housing bubble was just the straw that broke the camel's back. For decades wages had been stagnant, debt levels held by the American working class had been rising, and the number of workers per household had grown, plateauing at a new normal of two workers for virtually every working-age married household. The economic "recovery" from 2003 to 2007 was fueled almost entirely by credit and was never sustainable; had the housing bubble not happened, unemployment and GDP would have suffered during that period instead. The economic "recovery" of the mid-2000s was thus largely a product of the housing bubble itself, and the Great Recession that followed the bursting of the bubble was only partly caused by the burst; to a large extent it was simply an unmasking of the real 21st-century American economy.

After decades of decline, wages now account for the lowest portion of national income since World War II. What's more, wages for the bottom 95% of income recipients now account for less than 50% of gross national income, and that share is headed toward 40%. It is an amazing figure to contemplate: right now only about 45% of the nation's income goes to the wages of the bottom 95% of income recipients.


[graph omitted]
source: http://www.stateofworkingamerica.org/charts/view/138

Labor's share of income typically increases during recessions, because capital income typically falls, but this has not been the case during the recent recession. The graph below shows labor's share of national income over time as determined by the Bureau of Labor Statistics, indicating an accelerating decline beginning in the 1980s.


[graph omitted: labor's share of national income over time, Bureau of Labor Statistics]
source: http://www.bls.gov/data/#productivity

The declining share of national income going to wages wouldn't necessarily be a bad thing if the share of capital income going to the bottom 95% of the population were increasing to make up the difference; in fact that would be a good thing. But the reality is the opposite. Not only has the share of national income going to wages been declining over the past 30 years, but the share of capital income going to the bottom 95% of the population has been declining as well. Thus, a larger portion of national income has been going to capital, and an increasingly large portion of that capital income has been going to the richest 1% of the country.

[graph omitted: share of capital income received by top 1% and bottom 80%, 1979-2003]
source: http://sociology.ucsc.edu/whorulesamerica/power/wealth.html

During the 1970s and 1980s the financial sector accounted for no more than 16% of domestic profits, yet so far in the 21st century the financial sector has accounted for roughly 40% of domestic profits. In 1947 the financial sector accounted for about 2.5% of GDP, and by 2006 that figure had risen to 8% of GDP.

The rise of financial sector profits certainly corresponds to the decline in labor's share of national income; however, this isn't simply because the financial sector is taking a larger share of the national income itself (though that is part of it), it is also because of the effects of the financial sector on the rest of the economy. While the graph above compares financial sector profits to profits in non-financial sectors, the fact is that the financial sector controls the non-financial sector to a large, and increasing, degree. The financial sector plays a huge role in selecting boards of directors, in the selection of corporate executives, in the setting of executive pay, in the off-shoring of the non-financial sector, in corporate accounting, in corporate tax compliance, and of course in the lobbying of government and the creation of the tax code and economic regulations.

It is the financial sector that has most strongly driven the shift away from wages and toward profits. The dominance of the financial sector has grown to the degree that, in essence, all corporations now work for the financial sector. To be more precise, the executives of publicly traded corporations essentially work for the financial sector, and to a degree even the executives of privately held corporations do. The interests of executives of privately held corporations such as Koch Industries, Cargill, and S.C. Johnson are often, though not always, heavily aligned with the financial sector, because these corporations still have to raise investment capital; and even when their interests aren't directly aligned, market forces have often compelled them to follow the lead of publicly traded corporations in regard to compensation practices, lobbying, off-shoring, and the like.

Capital ownership has become extremely concentrated as the financial industry has consolidated and financial wealth itself has become concentrated. Not only that, but of the financial assets held by the bottom 90% of the country, most are held through mutual funds and other types of administered pools, which grant the voting rights to financial institutions rather than to the individual investors.

When an individual buys shares of a mutual fund, the individual supplies the capital, but the fund manager gets the voting rights, because it is the manager who uses that money to buy actual shares and who holds them. So, interestingly, the working class, largely through its investment in 401(k)-style private retirement accounts, has actually increased the financial sector's control over capital, which the financial sector has used to act against the economic interests of the working class.

The central fact of America's economic history, however, is this: Capital ownership has become increasingly consolidated over time, and dramatically so.

The graphs below illustrate the consolidation of capital that has taken place over time in the United States. Both graphs account for free citizens only.

Both graphs are based on estimates I have made using several different data sources, since I have found no single data source that provides this information. Estimates prior to the Civil War are based on land and slave ownership rates as well as general wealth distribution. Estimates from the end of the Civil War through the early 20th century are based on farming and self-employment rates as well as wealth distributions. Estimates after World War II are based on small business ownership rates, stock ownership, and general wealth distributions.

Prior to industrialization virtually all wealth owned by individuals was capital, whereas today most people's wealth is tied up in non-productive property, primarily homes, which are no longer a means of production as they were in early America. Today the bottom 90% of the population owns roughly 15% of all financial wealth, which comprises stocks, bonds, business equity, trusts, and investment real estate (the last being the most common form of capital held by the bottom 90%).

The most significant drop in capital ownership has been in terms of business equity, i.e. self-owned businesses, including farms. This is, of course, primarily a product of industrialization. Prior to industrialization the United States (and the preceding 13 colonies) had not only a farming economy, but an economy based on highly distributed individual capital ownership, in which almost all white families directly owned their own capital and worked for themselves. With the rise of industrialization and corporations, however, individual producers were unable to compete with collective production controlled by corporations, and individual capital ownership rapidly declined.

In the early American economy total wealth ownership was heavily tied to capital ownership, so as capital consolidation took place the share of total wealth held by the bottom 90% of the population followed the same declining trend. Standards of living for the bottom 90% were still rising as the entire economic pie grew, but the portion of the nation's total wealth going to the bottom 90% was shrinking as the pie got larger. (In other words, if the bottom 90% received 50% of an economic pie of 100 units in 1800, that would be 50 units; by 1900 they were receiving roughly 33% of a pie that had grown to 500 units, or 165 units. So while their share shrank, their standard of living still rose.) This trend continued essentially until the progressive reforms of the early 20th century, which resulted in a larger share of newly created wealth going to workers; capital consolidation, however, continued.
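As a quick check on the parenthetical arithmetic above (the pie sizes and shares are the illustrative numbers from the text, not historical data):

```python
# Illustrative numbers from the text: a shrinking share of a growing pie
# can still be a growing absolute amount.
pie_1800, share_1800 = 100, 0.50
pie_1900, share_1900 = 500, 0.33

amount_1800 = pie_1800 * share_1800          # 50 units
amount_1900 = round(pie_1900 * share_1900)   # ~165 units

print(amount_1800, amount_1900)  # 50.0 165 -> the share fell while the amount rose
```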

The New Deal reforms of the 1930s and 1940s significantly increased the portion of wealth going to labor and created various social safety nets, resulting in conditions in which the incomes and overall wealth of the bottom 90% of the population became less dependent on capital ownership, even as the portion of capital owned by the bottom 90% continued to shrink. What the New Deal reforms did was make it possible for a middle class to exist that did not own capital; they did not change the underlying fundamentals of the capitalist economy.

Economic power and control remained in the hands of capital owners, and as capital consolidation continued, economic and political power was inevitably consolidated as well. The continued consolidation of capital drove profits to a shrinking portion of the population, producing a self-reinforcing cycle: increasingly concentrated wealth leads to increasingly concentrated political power, which is used to advance the interests of the wealthy, which leads to still greater concentrations of wealth, and thus ever greater concentrations of political power, and so on.

Through this process, over the past 30 years the mechanisms for supporting a non-capital-owning middle class have been eroded. Economic policies have swung back in greater favor of capital owners, strengthening the relationship between capital ownership and overall wealth, while at the same time doing nothing to facilitate more widespread capital ownership; indeed the opposite, facilitating continued and increasing concentration of capital ownership.

The persistent decline in overall capital ownership throughout the history of America, arguably the world's leading capitalist economy, indicates that concentration of capital ownership is an inherent quality of capitalism, one which inevitably leads to centralization of power, fundamental disenfranchisement of the majority of the population, and economic stagnation.

Next, then, we will take a look at economic theory which can be used to explain these qualities of capitalism.

Sources: The primary source for this section is American Economic History - sixth edition; Jonathan Hughes & Louis P. Cain; 2002. Other sources are linked within the text.

The Mechanisms and Impacts of Capitalism

Defining capitalism

Before analyzing capitalism we first have to define what capitalism is, which may sound simple enough, but it is actually quite complicated. This is because the very definition of capitalism is itself skewed by ideological perspective. Definitions of capitalism are used to frame perceptions of capitalism and to focus on specific qualities while taking attention away from other qualities. Proponents of capitalism provide definitions of capitalism that focus on "free-markets" and private property ownership. These are the most commonly used types of definitions in the United States. Examples of such definitions include:

An economic system characterized by private or corporate ownership of capital goods, by investments that are determined by private decision, and by prices, production, and the distribution of goods that are determined mainly by competition in a free-market.
http://www.merriam-webster.com/dictionary/capitalism

An economic system based on a free-market, open competition, profit motive and private ownership of the means of production. Capitalism encourages private investment and business, compared to a government-controlled economy. Investors in these private companies (i.e. shareholders) also own the firms and are known as capitalists.
http://www.investopedia.com/terms/c/capitalism.asp#axzz1VxMk733J

Economic system based (to a varying degree) on private ownership of the factors of production (capital, land, and labor) employed in generation of profits. It is the oldest and most common of all economic systems and, in general, is synonymous with free-market system.
http://www.businessdictionary.com/definition/capitalism.html

On a side note, capitalism is not the oldest of all economic systems; it is widely accepted to have emerged in the late 18th to early 19th century.

What is missing from these definitions, however, is any sense of the inherent divisions implied by private capital ownership and the necessary distinction between capital owners and the wage-laborers whom they employ. Such definitions gloss over this relationship and leave the impression that in a capitalist system everyone is a "capitalist", or that there are no conflicts of interest between members of the private economy. If anything, such definitions tend to portray a conflict of interest between capital owners and government, which is not at all inherent, while leaving no impression of the inherent conflicts of interest between capital owners and workers, or between capital owners and other capital owners.

The definition of capitalism provided by Wikipedia is a little better; it states:

Capitalism is an economic system in which the means of production are privately owned and operated for profit, usually in competitive markets. Income in a capitalist system takes at least two forms, profit on the one hand and wages on the other. There is also a tradition that treats rent, income from the control of natural resources, as a third phenomenon distinct from either of those. In any case, profit is what is received, by virtue of control of the tools of production, by those who provide the capital. Often profits are used to expand an enterprise, thus creating more jobs and wealth. Wages are received by those who provide a service to the enterprise, also known as workers, but do not have an ownership stake in it, and are therefore compensated irrespective of whether the enterprise makes a profit or a loss. In the case of profitable enterprise, profits are therefore not translated to workers except at the discretion of the owners, who may or may not receive increased compensation, whereas losses are not translated to workers except at similar discretion manifested by decreased compensation.

There is no consensus on the precise definition of capitalism, nor on how the term should be used as a historical category. There is, however, little controversy that private ownership of the means of production, creation of goods or services for profit in a market, and prices and wages are elements of capitalism.
http://en.wikipedia.org/wiki/Capitalism

Here are the important points. Capitalism isn't simply an economic system in which there is private property ownership. For example, if you have a society in which everyone privately owns their property, but it is a subsistence economy where for the most part everyone creates all of their own goods, i.e. everyone is a farmer who lives on their own property, farms for themselves, and consumes the products of their own labor and property, this clearly is not a capitalist system. It's a system with private property ownership, but it isn't a capitalist economy; indeed it is barely an economy at all, since there is little or no exchange of goods and services.

So a capitalist system is one in which commodities are produced for exchange.

As for "private property", this term is often misconstrued to mean all property, but this is not the case; the definition of capitalism rests not on private property rights in general, but private ownership of the "means of production", i.e. capital, hence the term "capital-ism". Even non-capitalist systems may retain private property rights regarding commodities and non-productive property, like houses, cars, etc., but any economy that didn't include private ownership of capital, i.e. property used to create value and derive income, would not be a capitalist economy.

So a capitalist system is one in which privately owned capital is used to produce commodities for exchange.

However, if everyone owned their own capital and worked for themselves to create commodities, then no one would be a capitalist. A capitalist is someone who derives their income through the ownership of capital upon which other people are employed to work. In essence, if everyone owns the capital that they use then no one is a capitalist, since everyone's income would be purely a product of their own labor and no one would derive income from capital ownership. (We'll get into this more later.)

So a capitalist system is one in which non-capital owners are paid wages by private capital owners to create commodities for exchange.

So these are the core components of the definition of capitalism: an economic system in which some people own capital and some people don't, and in which the people who own capital pay wages, determined by markets, to non-capital owners to use that capital to create commodities for exchange in markets.

While these are the core defining characteristics of a capitalist system, there is always some leeway in the application of these definitions. For example, in the United States we have had a federal minimum wage for over 50 years, which means that labor prices are not always determined purely by markets. Likewise there are many price controls, taxation policies, and subsidies which distort the market prices of other commodities as well. Similarly, as has been noted above, roughly half of Americans own some kind of share of capital, i.e. stocks or direct business equity, but owning a tiny, insignificant share of capital doesn't make one a capitalist, and it doesn't satisfy the condition described above, in which everyone owns their own capital so that no one is a capitalist. Over 99% of Americans who own stock don't own enough to have any meaningful influence over the use of capital, i.e. they have no control over capital through their stock ownership. Likewise, the vast majority of American stock owners own such small shares that the income generated from them is insignificant. Most American stockholders work for wages and receive virtually no income from stock ownership until they retire, and even then depend on supplemental income such as Social Security or a pension in addition to income from their holdings. So while the distinction between wage-laborers and capital owners is somewhat blurry in the American system, if we look at primary forms of income and control the distinctions are still quite clear.

In essence, what we are really talking about when we talk about all modern economies are systems of collective production. Every modern economy is a "collectivist" system of some form or another, including capitalism. The difference between various modern economic systems is merely in how the products of collective labor are distributed. There are no modern economies dominated by individuals working in isolation or through self-employment. The driving force behind every modern economic system is collective production, and the defining characteristic of all modern economies is the massive scale of collective production, not only in terms of the size of collective entities, such as corporations that directly employ millions of workers, but in terms of the collective integration of separate entities. The difference between historically defined systems like communism, socialism, fascism, and capitalism isn't in determining whether people will work collectively or not; it is in determining how the products of collectively produced value are distributed. Who owns the work product that is produced by millions of people working together? That's the real distinction between various modern economic systems.

Also notice that in my definitions of capitalism I did not use the term "free" market. There is a reason for this, not the least of which is that defining what "free-market" means is itself even more difficult and controversial than defining capitalism. If we look at the definition of capitalism provided by Investopedia above, we see that it includes the phrases "free-market" and "open competition". These are not actually defining characteristics of capitalism. Capitalist systems can operate under "free-market" conditions (of various definitions, which we will address later) and open competition, but they don't require them. Indeed the American economy is not a "free-market" (by any definition of the term), nor is there 100% open competition. The key defining characteristic of capitalism is private ownership of capital and the employment of non-capital-owning wage-laborers by capital owners. A capitalist system is one in which capitalists profit through their ownership of capital. Even if prices are heavily manipulated, or if monopolies dominate, if capital owners still profit through their ownership of capital then it is a capitalist system. Descriptors may need to be attached to the term in such cases, such as state-capitalism, corporate-capitalism, or monopolistic-capitalism, but these are still types of capitalism: types of systems in which profits are reaped by private capital owners through the ownership of capital and the employment of wage-laborers who produce commodities for exchange.

Value creation, compensation, and modes of production

I consider this subject to be the single biggest point of confusion when it comes to economics in general, but especially in regard to capitalism.

Probably the most significant subject of economics, at least from a moral and philosophical perspective, is the issue of contribution vs. reward. Everyone has a fundamental sense of fairness which directs us to believe that, at base, individuals should be rewarded based on their contributions. This is the essence of John Locke's statement that an individual has a right to the value that they create with their own labor. Basically, if I go to unclaimed land and dig up some clay out of the ground and fashion it into a bowl then that bowl is mine, I have a right to it, and virtually everyone agrees with that premise.

The problem with discussions of contribution vs. reward in a capitalist economy, however, is that while virtually all discussions are based on that premise, that premise is completely invalid in a capitalist economy. To be more specific, discussions of value creation and reward within a capitalist framework are almost always framed as if everyone in a capitalist economy is an individual producer and receives nothing more in reward than the value created by their own labor, when in fact no one in a capitalist system is an individual producer who receives the true value of their own labor.

Let's examine various modes of production outside of the context of any given economic system, without regard to property rights or labor markets, or any defined framework for allocating the products of production.

The most basic mode of production is simply individual production. With the individual mode of production it is relatively easy to determine who creates what, and what each person's reward should be. Assuming that there are no existing property rights and a right to the products of one's own labor as described by John Locke, each individual's reward is simply in keeping the products of their own labor. Again, this assumes no existing property rights, which is a fundamental requirement for the basic functioning of John Locke's natural right to property, because if an individual were to labor on someone else's property or using someone else's property, then the whole picture gets far more complicated, as we will later see. So under the individual mode of production, assuming no existing property rights, and a natural right to the product of one's own labor, it is quite easy to determine who creates what and what value each individual is entitled to.

Using the diagram above let us refer to the light blue diamonds as "widgets" and for sake of argument imagine a theoretical economy where widgets are the only commodities that exist. What we can see is that individual A creates 1 widget, B creates 2 and C creates 4. Regardless of why there is a difference between what A, B, and C produce (maybe A is a child, maybe A is disabled, maybe A is elderly, maybe A is simply lazy, or maybe A just isn't very good at producing widgets) clearly there is a difference in what each of the individuals produce.

In terms of basic fairness, assuming that all else is equal, we would say that taking a widget from C to give to A because A has fewer widgets is unfair. This is a position that basically every economist from John Locke to Adam Smith to Karl Marx to Ayn Rand to Milton Friedman is in agreement on. There is, essentially, nothing controversial about our understanding of the individual mode of production. It might be nice for C to give a widget to A, but to forcibly take a widget from C to give it to A would indeed be a form of theft and would be unfair to C.

But now we move on to collective production. By collective production we mean simply any mode of production in which multiple individuals work together to produce goods or services. As we can see, the total number of widgets produced by individual production is 7 and under collective production we have 12 widgets being produced. This represents the fact that individuals work collectively because individuals are able to create more net value by working together than they are able to create when working individually.

Note that for the sake of this discussion, A, B, C do not represent the same individuals under each mode of production. The diagram for each mode of production is separate from the rest, meaning that we aren't to imply that individual A is less productive in the collective production diagram because individual A is less productive in the individual production diagram.

So the interesting thing about collective production is that while it leads to increased net productivity, determining who created what is much more difficult, even without the complicating factor of property rights. Assuming no property rights, how are the widgets to be divided up amongst the individuals who created them? According to John Locke's statement on an individual's right to possession of the value that they create with their labor, each individual should get what they produced. But with collective production it is essentially impossible to determine what each individual produced, since, by virtue of the fact that collective production is more productive than individual production, this implies that "the whole is greater than the sum of its parts", which is to say that a certain portion of what is produced is a product of collectivism itself, i.e. it is not a product of the labor of any individual, it is truly a product of the interactions between the individuals.

If you were to take a small group of people and have them work collectively and then divide up the products of their collective labor, in a neutral setting what would most likely happen is that individuals would pay attention to who contributed what and who was working the hardest, and form opinions about what portion of overall production each individual was responsible for. They could even go so far as to track the exact output or exact working time of each individual. People would then divide up the proceeds of production in proportion to the relative estimated contributions of each individual. But that's not how our economic system works, and it's not how virtually any economic system has ever worked. In reality, under collective forms of production such as existed in tribal groups, everything was highly influenced by social status, which was itself a highly complex system based, variably, on any number of things, from family history to individual accomplishments to intimidation to political or religious power to bribery. So all kinds of means have existed throughout history that have distorted the ways in which individuals received rewards from collective production, over-compensating some individuals while under-compensating others. Yet collective production persisted for a variety of reasons, including the fact that even individuals who were under-rewarded relative to their contribution may still have been better off than they would have been under an individual mode of production. Another reason, of course, has been coercive force, be it slavery or simple social pressure, used to get under-compensated individuals to contribute to collective production while being under-rewarded. Throughout history the "fairness" of collective production has varied widely from one society to the next, and indeed a major aspect of social structure in every society has been precisely how to deal with dividing up the fruits of collective production.

Now let's move on to hierarchical collective production. Hierarchical collective production is essentially just a form of collective production that involves greater division of labor and specialization within a hierarchical structure. Fundamentally it isn't much different from basic collective production, but there are some important distinctions. Within the hierarchical mode of collective production the ability to determine how much value each individual creates is complicated by two primary conditions: the first being division of labor and specialization, the second being the nature of hierarchy itself. When each individual is roughly equal and doing roughly the same type of work, evaluating how much each individual contributes is relatively simple. However, division of labor and specialization imply that individuals are performing significantly different tasks. The most fundamental of these divisions may be planning vs. execution, or managers vs. performers, which brings us to the hierarchical aspect.

Under a hierarchical system of collective production some individuals are directly involved in the production of goods and services, while others are only indirectly involved. Generally speaking, "managers" do not directly create value via hierarchical collective production; they (theoretically) add value by enhancing the productivity of the performers that they manage. Looking at the diagram for hierarchical collective production above, we see that all of the widgets are actually produced by individuals B and C; A doesn't create any widgets. This means that in order for A to be compensated, some portion of the widgets produced by B and C has to be allocated to A. Typically, of course, an individual would not manage only two other individuals, as depicted here. This is an important note: since managers/planners/administrators don't produce commodities, all of the compensation for such positions has to be generated by the production of commodities by performers. This is a major reason why hierarchical systems have pyramid-type structures, because it takes many performers to support a manager.

It is virtually impossible, however, to determine the real contribution of "managers". Let's use a rowing team as an example. Rowing teams use someone called a coxswain who sits in the boat with the rowers, sets the pace, helps guide the boat, and provides encouragement. Now, let's say that a rowing team with a coxswain performs 25% better than one without. Does that mean that the coxswain is fully responsible for 25% of the performance of the team? Let's say that the team has eight rowers and they enter a race (in which all of the other teams also have eight rowers and a coxswain), and that first prize is $10,000. How should the team divide the prize money? Should the coxswain get 25% of it because a coxswain generally improves performance by 25%? Assuming all rowers get the same amount, that would mean $2,500 for the coxswain and about $938 for each rower. But what does a coxswain do compared to the rowers? The coxswain certainly has the easier job: they don't have to work out or train as intensively, and they don't have to exert themselves as much during the race. Let's say, for the sake of argument, that simply putting a metronome in the boat would increase performance by 10%, which is less than the coxswain, but more than doing nothing. No one would argue that the metronome, or the metronome's owner, should get 10% of the prize money, would they?
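To check the arithmetic (the prize amount and percentages are the hypothetical figures from the example):

```python
prize = 10_000
rowers = 8

coxswain_share = 0.25 * prize                   # $2,500 if paid the full measured "contribution"
per_rower = (prize - coxswain_share) / rowers   # $937.50, i.e. roughly $938 each

print(coxswain_share, round(per_rower))  # 2500.0 938
```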

So, when it comes to hierarchical collective production, determining what each individual actually produces is even more difficult, because "non-performers", i.e. managers, executives, administrators, etc., don't directly produce commodities of determined value. To complicate matters further, hierarchical systems are heavily influenced by social pressures. There is a strong incentive to compensate managers more than the performers they manage, even if they contribute less to overall production than any individual performer, because of the way that people perceive authority. Authorities within hierarchical systems tend to have more control and asymmetric information regarding performance, as well as the ability to affect conditions for those under them. The very nature of the pyramid structure means that individuals are generally more powerful the higher up in the hierarchy they are; in relative terms, every individual is roughly as powerful as the collective total of the individuals below them. In other words, in an organization with 100 performers, managed in teams of 10 by 10 middle managers, who are managed in teams of 5 by 2 executives, who report to a president, each middle manager is essentially as powerful as the 10 performers they manage, the 2 executives are as powerful as the 10 middle managers, and the president is as powerful as the two executives, which means the president is 100 times more powerful than any individual performer and middle managers are 10 times more powerful than any individual performer. Thus, those higher up in the hierarchy have a disproportionate ability to influence compensation, distorting it in ways that don't reflect actual contribution and that invariably benefit those higher in the hierarchy. This is the basis for the formation of unions, which band individuals at the lower levels of hierarchical organizations together so that they can act as a single entity, putting the workers on par with others higher in the hierarchy in terms of power, something I'll address in greater detail in a later section.
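As a minimal sketch of that relative-power claim, using the hypothetical organization from the paragraph above and treating each person's "power" as the number of performers they ultimately stand above:

```python
# Hypothetical pyramid from the paragraph above: 100 performers in teams of 10,
# 10 middle managers in teams of 5, 2 executives, 1 president. Treat each
# person's relative "power" as the number of performers at or below them.
power = {
    "performer":      1,        # just themself
    "middle manager": 10,       # manages 10 performers
    "executive":      5 * 10,   # manages 5 middle managers -> 50 performers
    "president":      2 * 50,   # manages 2 executives -> 100 performers
}

for role, p in power.items():
    print(f"{role}: {p}x the power of an individual performer")
```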

And now we finally arrive at industrial collective production. Industrial production is also inherently hierarchical, but the difference between industrial production and simple hierarchical production is that under an industrial system the tools of production are far more significant. In theoretical terms we ignore the impact of tools when considering simple hierarchical production, but when we consider industrial production we acknowledge that the tools themselves are a major contributor to productivity. Indeed the term "tools" puts it mildly; what we are really talking about are machines, which is to say automated tools, or tools powered by something other than people or animals.

The important difference between simple hierarchical production and industrial production is that changes to the tools of production under an industrial system can have significant impacts on workers in the system. The major advantage of industrial production over other forms of production is labor efficiency, meaning that industrial production reduces the amount of labor required to generate a unit of output. Note, however, that this doesn't necessarily mean that industrial production is more efficient in terms of its use of resources; in many cases industrial processes waste more resources per unit of output than non-industrial methods. The tradeoff with industrial production is in terms of labor efficiency: if it takes less human effort to produce a widget using an industrial system, but in the process excess material is wasted, the overall system is more efficient in terms of labor but less efficient in terms of resources. This can still lead to an overall increase in output and an overall reduction in commodity prices when the acquisition of raw materials itself becomes industrialized, thus reducing the amount of human labor required per unit of output.
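A toy comparison may make that tradeoff concrete (all of the figures here are invented for illustration):

```python
# Invented figures: craft vs. industrial production of one widget.
craft      = {"labor_hours": 2.0, "material_kg": 1.0}
industrial = {"labor_hours": 0.2, "material_kg": 1.4}

labor_gain    = craft["labor_hours"] / industrial["labor_hours"]   # ~10x less labor per unit
material_loss = industrial["material_kg"] / craft["material_kg"]   # 1.4x more material per unit

print(f"labor per unit: {labor_gain:.0f}x better; material per unit: {material_loss:.1f}x worse")
```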

But let's stick to the relationship between tools and workers under the system of industrial production. Tools increase the amount of output each worker is capable of generating, which is a good thing; it means that in total we can produce more goods. It also means that improvements to the tools of production can increase productivity even further: improvements to the tools might take workers from being able to produce 10 widgets a day to being able to produce 15. But here is the big question: if improvements to the tools of production lead to increased output, who gets the increased output? Furthermore, how can one differentiate between increases in output due to improvements to the tools vs. increases in output due to improvements in worker performance? Note that in this section we are simply dealing with modes of production, not economic models, so things like who owns the tools or how compensation is determined are not defined. So let's move on to the next section, where we can examine how this all works within a capitalist system of property rights and labor markets.

Property rights and labor markets

In the previous section we raised a lot of questions about how to determine who actually creates what value under various modes of collective production. It is very easy to determine who creates what value under individual modes of production, but capitalism is inherently dominated by collective modes of production, most specifically the industrial mode. A "capitalist economy" may include individuals who work for themselves, by themselves, but such individuals are not capitalists and aren't a part of the capitalist system; they are independent individuals who co-exist within a capitalist economy, just as co-ops and state-run enterprises may exist within a capitalist economy even though they are not capitalist entities.

When we are dealing with capitalism what we are really dealing with is a particular system of laws that govern forms of collective production. And the particular set of laws that define capitalism, as we have previously noted, are laws defining private ownership of the means of production (i.e. tools and raw materials) and the use of labor markets to determine how value that is created by multiple people working together is distributed. To put it another way, capitalism is a form of collectivism that allocates the fruits of labor to property owners.

We noted above that the discussion of the modes of production was going to ignore property rights for the moment, particularly property rights in relation to capital. This is because property rights in relation to capital greatly complicate the issue of how value that is created by individuals' labor is allocated. Let's go back to the most basic mode of production, individual production. We've said that, in agreement with John Locke's statement that an individual has a right to the products of his or her own labor, every individual should receive the full value of what they create with their own labor. But, what if someone exercises their labor upon property that someone else owns? I used the example previously of someone digging up clay and making a bowl from it, noting that of course this person should have a right to keep the bowl that they made. But what if the person dug up the clay on land owned by someone else? Then who has a right to the bowl? Technically, the material the bowl is made of belongs to someone else, and the value added to the raw clay belongs to the person who made it, but these two "things" co-exist within the same object. Certainly the issue of "having a right to the product of one's own labor" is complicated by private ownership of capital. Objectively we may conclude that the value of the raw clay belongs to the owner of the property and the value added to the raw clay belongs to the individual who exercised their labor upon the earth to dig it up and fashion it into a bowl, but that isn't how it works within a capitalist system.

Here is what is important to understand about the capitalist framework: within it, fundamentally everything goes back to property rights, which means that under a capitalist system the bowl belongs completely to the owner of the land from which the clay was drawn, even if the value of the raw clay is five cents and the value of the bowl is fifty dollars. So 100% of the value belongs to the original property owner; the laborer has no right to any of it, and thus is in fact deprived of the value created by their labor. That is, at least, the starting point of capitalism. We have, over time, established laws that grant some rights to laborers, such as minimum wages, guarantees of payment, etc., but fundamentally the right to ownership of newly created value belongs completely to the owner of the property used to create said value. Typically in a capitalist system, however, when an individual is going to exercise their labor upon someone else's capital, the two parties come to an employment agreement in which the laborer and the capital owner agree to a set price to be paid for the labor, with the capital owner keeping full ownership of the resulting work product.

So let's move on to collective production and labor markets to see how this plays out in the big scheme of things. Advocates of capitalism claim that "labor markets" can properly distribute the fruits of collective production among all of those involved in the production process, including workers and capital owners. Labor markets effectively only play a role in hierarchical and industrial modes of production, where workers negotiate their compensation with some hierarchical authority. According to neo-classical labor market theory, the value that an individual worker creates is determined by the price they are able to negotiate for their labor. This view is treated as commonplace in neo-classical economics, as described in this passage on the effect of minimum wages from a "libertarian" think tank.

Minimum wage laws, if meaningful, require employers to pay some workers more than they would have earned in an unhampered market economy. For example, whereas the federal minimum wage at this writing is $5.15 per hour, in the absence of this minimum some employers might pay their workers $4.50 per hour. Economic theory suggests that in competitive markets, workers will be paid their marginal revenue product—the amount of revenue that the worker contributes to the firm. That is a wage consistent with the profit-maximizing behavior of employers and the utility-maximizing behavior of employees.

Thus, if a firm in an unregulated market economy is paying its workers $4.50 an hour, it is probable that the marginal contribution of the worker to the firm is about $4.50 in revenue per hour. If, in fact, it were higher, say $5.00, it would be profitable for another firm to offer the worker in question a higher wage than the existing $4.50.
- Does the Minimum Wage Reduce Poverty? - Employment Policies Institute

If this were true then employees would be paid the exact full amount of the value that they create. This is the way in which all discussions of income, or value creation and reward, are popularly framed in America today, yet it is obviously incorrect and completely contradicts classical economics, as shown by Adam Smith's statements on wages and profits in The Wealth of Nations and by the classical economist David Ricardo.

The value which the workmen add to the materials, therefore, resolves itself in this case into two parts, of which the one pays their wages, the other the profits of their employer upon the whole stock of materials and wages which he advanced. He could have no interest to employ them, unless he expected from the sale of their work something more than what was sufficient to replace his stock to him;
- Adam Smith; The Wealth of Nations - Book 1 Chapter 6

Thus the labour of a manufacturer adds, generally, to the value of the materials which he works upon, that of his own maintenance, and of his master's profit. ... Though the manufacturer has his wages advanced to him by his master, he, in reality, costs him no expence, the value of those wages being generally restored, together with a profit, in the improved value of the subject upon which his labour is bestowed.
- Adam Smith; The Wealth of Nations - Book 2 Chapter 3

If the corn is to be divided between the farmer and the labourer, the larger the proportion that is given to the latter, the less will remain for the former. So if cloth or cotton goods be divided between the workman and his employer, the larger the proportion given to the former, the less remains for the latter.
- David Ricardo; The Principles of Political Economy and Taxation, 1817

The important phrase of note in the statement from the Employment Policies Institute is "competitive markets". Given the context, this term can only be interpreted as meaning a "perfect market", a subject that we will address in detail in a later section on markets; but it is sufficient to say here that the claim that in a perfect market "workers will be paid their marginal revenue product" is technically true in the theoretical sense, yet never true in an actual capitalist system, because perfect market conditions never exist. Under the conditions of a "perfect market" all profits are driven to zero, conditions under which capitalism cannot operate, which is why, as we will explore more fully, capitalism is dependent upon imperfect, or distorted, markets.

For now let's set theory aside and look at reality. The reality is that, as Adam Smith describes, in order for an employer to have any reason to hire an employee, that worker has to create surplus value above and beyond what is paid to the employee. As Adam Smith describes it, the value created by the worker is divided between the portion that is paid to the worker in the form of wages and the portion that is retained by the employer in the form of profits, which is to say that profits are the value created by workers which isn't paid to workers as compensation, but is rather retained by capital owners.

But the reality of labor compensation is even more complex than what Adam Smith describes, and it really demonstrates why the notion that workers could ever be paid the full amount of the value that they create within a capitalist system, or any system, is nonsense. In order for a worker to contribute toward profits for a capital owner, they have to create not only enough value to cover their wages plus the capital owner's profits, but also enough value to cover the costs of employing them. If a worker is paid $4.50 an hour, then in America we also have to consider the employer's additional Social Security contributions, the workers' compensation insurance covering the employee, the unemployment insurance paid on behalf of the employee, the cost of the capital used by the employee, the costs of training the employee, the cost of supervising the employee, the cost of administering the employee's records and pay, and so on.

Thus, a worker being paid $4.50 an hour would actually cost an employer more like $8.00 an hour to employ. So we know that the value of the employee's labor must actually be at least around $8 an hour just for the employer to break even; but if the value of the employee's contribution were exactly $8, it still wouldn't be worth employing them, because at a cost of $8 there would be no benefit to the employer. So, in reality, we know that the value created by the employee has to be more than $8.
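The overhead line items below are invented for illustration, but they show how a $4.50 hourly wage can plausibly translate into roughly an $8.00 hourly cost of employment:

```python
wage = 4.50  # hourly wage paid to the worker

# Invented per-hour overhead figures, for illustration only.
overhead = {
    "employer payroll taxes (Social Security, etc.)": 0.40,
    "workers' compensation insurance":                0.30,
    "unemployment insurance":                         0.20,
    "capital used by the employee":                   1.20,
    "training":                                       0.40,
    "supervision and administration":                 1.00,
}

total_cost = wage + sum(overhead.values())
print(f"wage: ${wage:.2f}/hr, full cost of employment: ${total_cost:.2f}/hr")
# -> about $8.00/hr; the worker must produce more than this for the hire to yield any profit.
```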

Therefore, generally speaking, in order for a worker to be "worth hiring" to an employer within a capitalist system they have to be able to create enough value to cover their wages, the capital owner's cost of employing them, and the profits of the capital owner or owners. The diagram below depicts a worker who creates 10 widgets, out of which 5 widgets are paid to the worker to compensate them for their labor (light blue), one widget goes to taxes paid on behalf of the worker (red), one widget goes toward insurance and other costs associated with the employment of the worker (dark blue), one widget goes toward the costs of administration and supervision of the worker (i.e. goes into a pool of widgets used to pay managers and human resources staff, etc.) (yellow), and 2 widgets go to the capital owner and are considered "profits" (green).

In this example a worker under the industrial mode of collective production is keeping half of the value that they create, with the other half going to the costs of employment, managing said worker, and to the profits of the capital owner or owners. How that value is actually split is determined by "labor markets" within a "free-market" system.
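Restating that split numerically (the widget counts are the ones from the example above):

```python
# Widget split from the example: where 10 widgets of created value go.
produced = 10
split = {
    "worker's wages":             5,
    "taxes paid for the worker":  1,
    "insurance and other costs":  1,
    "administration/supervision": 1,
    "capital owner's profit":     2,
}

assert sum(split.values()) == produced
worker_share = split["worker's wages"] / produced
print(f"worker keeps {worker_share:.0%} of the value they create")  # 50%
```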

Under the individual mode of production an individual's "reward", or compensation, is determined purely by the market value of what they produce, i.e. the end product. If an individual goes out into the woods, chops down some trees, uses them to build a boat, and then sells the boat, they receive the full value of that boat. So an individual's income in that case is a direct representation of what they produce, i.e. their rewards are always reflective of their contributions.

Under hierarchical and industrial modes of production, however, where individuals are paid wages, the system is split into two separate markets, the labor market and the commodity market, and this severs the relationship between what an individual produces and what an individual receives in compensation. What "capitalists", i.e. owners of capital, i.e. employers, essentially do when they hire someone is engage in speculation: the employer hires a worker at a set price on the bet that what the worker produces will be more valuable than the price paid for the labor. There isn't necessarily anything wrong with this, since the employer is taking on risk, but the real value of the worker's contribution isn't what was paid to the worker, it is the value of the product of the worker's labor; and given the nature of the relationship between capital owners and workers under hierarchical and industrial modes of production, capital owners hold inherent advantages in the deal. The capital owner benefits from asymmetric information and lack of market transparency, as well as hierarchical power. Indeed there are a vast number of reasons why labor markets result in under-compensation of workers, which we'll address in more detail after a further examination of market theory itself.

Defining "free-markets"

Now to address the issue of "free-markets". The concept of a "free-market" comes from Adam Smith, was embraced by other classical economists such as David Ricardo, and has of course become a concept central to neo-classical economics. But what exactly defines a "free-market"? First let's look at one of the important descriptions given by Adam Smith:

Secondly, it supposes that there is a certain price at which corn is likely to be forestalled, that is, bought up in order to be sold again soon after in the same market, so as to hurt the people. But if a merchant ever buys up corn, either going to a particular market or in a particular market, in order to sell it again soon after in the same market, it must be because he judges that the market cannot be so liberally supplied through the whole season as upon that particular occasion, and that the price, therefore, must soon rise. If he judges wrong in this, and if the price does not rise, he not only loses the whole profit of the stock which he employs in this manner, but a part of the stock itself, by the expense and loss which necessarily attend the storing and keeping of corn. He hurts himself, therefore, much more essentially than he can hurt even the particular people whom he may hinder from supplying themselves upon that particular market day, because they may afterwards supply themselves just as cheap upon any other market day. If he judges right, instead of hurting the great body of the people, he renders them a most important service. By making them feel the inconveniencies of a dearth somewhat earlier than they otherwise might do, he prevents their feeling them afterwards so severely as they certainly would do, if the cheapness of price encouraged them to consume faster than suited the real scarcity of the season. When the scarcity is real, the best thing that can be done for the people is to divide the inconveniencies of it as equally as possible through all the different months, and weeks, and days of the year. The interest of the corn merchant makes him study to do this as exactly as he can: and as no other person can have either the same interest, or the same knowledge, or the same abilities to do it so exactly as he, this most important operation of commerce ought to be trusted entirely to him; or, in other words, the corn trade, so far at least as concerns the supply of the home market, ought to be left perfectly free.
- Adam Smith, The Wealth of Nations, Book 4, Chapter 5

Here Smith is discussing the practice of buying commodities and reselling them for a profit within a short period of time, which was considered to be three months or less. As Smith had discussed prior to this, laws against such activity had been on the books "since ancient times", as it was widely viewed as callous and injurious to the public. Specifically, Smith was arguing against a law in England which set upper limits on the prices at which staples like corn could be bought and sold again on the same market within a short period of time. Smith argues that such activity does the public a service, and that there should therefore be no such restrictions on commodity markets.

There are several key concepts to highlight here. The first, of course, is that in the paragraphs prior to this quote Smith states that laws against this type of speculative activity date back to "ancient times", so Smith was arguing for market liberalization against long-standing practice. Secondly, Smith is arguing not for the "wisdom of crowds" (which is what markets are often touted as), but for the wisdom of specialists with in-depth knowledge to drive prices. Third is the fact that Smith presents the activity of the specialist trader in driving prices up in the short term as ultimately a public service that will presumably result in lower prices in the future. Thus Smith argues that corn merchants should be allowed to engage in short-term trading for profit, even though it can drive prices up in the short term, because in doing so they perform a public service: by limiting consumption in times of plenty they preserve supply for times of scarcity, distributing the commodity more evenly across conditions of bounty and dearth.
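
To make Smith's argument concrete, here is a toy two-season sketch (the inverse-demand rule and all quantities are invented for illustration, not drawn from Smith):

```python
# Toy model of Smith's corn merchant: buying in a season of bounty and
# reselling in the later dearth evens out prices and consumption.

harvest = {"bounty": 120, "dearth": 60}  # units coming to market each season

def price(supply):
    # Assumed inverse-demand rule: price rises as marketed supply falls.
    return 1200 / supply

# Without the merchant, each season's harvest is consumed as it arrives.
without_merchant = {season: price(qty) for season, qty in harvest.items()}

# With the merchant: buy and store 30 units during the bounty, resell in the dearth.
stored = 30
with_merchant = {
    "bounty": price(harvest["bounty"] - stored),  # less supply now -> higher price
    "dearth": price(harvest["dearth"] + stored),  # more supply later -> lower price
}

print(without_merchant)  # {'bounty': 10.0, 'dearth': 20.0}
print(with_merchant)     # {'bounty': ~13.33, 'dearth': ~13.33}
```

In the toy numbers the merchant's speculation raises the price in the season of plenty and lowers it in the season of scarcity, which is exactly the "evening out" Smith credits the corn trade with performing.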

Without going into any further analysis of Smith's thought experiment quite yet, let's simply note that Smith's ideas are foundational to the modern concept of "free-markets", and look at some current definitions of the term:

Business governed by the laws of supply and demand, not restrained by government interference, regulation or subsidy.
http://www.investorwords.com/2086/free_market.html

A market economy based on supply and demand with little or no government control. A completely free-market is an idealized form of a market economy where buyers and sellers are allowed to transact freely (i.e. buy/sell/trade) based on a mutual agreement on price without state intervention in the form of taxes, subsidies or regulation.
http://www.investopedia.com/terms/f/freemarket.asp

A free-market economy is one within which all markets are unregulated by any parties other than market participants. In its purest form, the government plays a neutral role in its administration and legislation of economic activity, neither limiting it (by regulating industries or protecting them from internal/external market pressures) nor actively promoting it (by owning economic interests or offering subsidies to businesses or R&D).
http://en.wikipedia.org/wiki/Free_market

So here we can see some of the immediate problems with the concept of a so-called "free-market". The term "free-market" is largely defined simply as a market without government regulation; however, implicit in the very same definitions is the idea that without government regulation markets would operate according to "the laws of supply and demand", that there would be no "subsidy", that "buyers and sellers" would be "allowed to transact freely", and that economic activity would be otherwise unhindered.

These assumptions are not just without merit; real-world experience demonstrates that such conditions don't exist in markets of any meaningful size. Without government interference there is nothing to prevent interference, restriction, and subsidy by the mafia, an activity for which we have real-world examples. Without government interference there is nothing to prevent the erection of barriers to free transaction, used either to extract rents, to marginalize competition, or to punish groups of people based on any number of criteria such as race, religion, or gender. Furthermore, there are any number of ways in which non-governmental entities can implement rules and regulations which distort the laws of supply and demand, a classic example being the National Football League's imposition of salary caps and profit sharing across organizations.

So let's look at some specific cases where private actors take action to subvert the laws of supply and demand, or to prevent people from transacting freely, in the absence of government regulation. We can forgo obvious examples like mafia interference and cite practices that are currently legal or that don't involve the use of force or threats.

Manufacturers paying retailers to eliminate competition

The classic case here is that of Coca-Cola and Pepsi paying retailers, and providing other incentives, not to carry competing products. At certain levels this practice is deemed illegal in the United States, but it still takes place in more modest forms. Soft-drink manufacturers still provide incentives to restaurants for exclusive contracts to carry only their beverages. Though it is deemed illegal for soft-drink companies to pay retailers not to carry competing products at all (something they have done many, many times), they can pay retailers to heavily promote their products and not to promote competing products. In such cases soft-drink companies pay retailers for exclusive preferential product placement, advertising promotion, sales, etc. The retailer is only rewarded if it promotes the manufacturer's products exclusively, so the manufacturer is paying not just for its own promotion but also for the non-promotion of the competition. Keep in mind that this is the currently legal version of the practice; in the past soft-drink manufacturers could legally pay retailers not to carry competing products at all, and even after that was outlawed the practice continued, leading to numerous legal cases.

So, in the absence of regulation, we have an example here of a practice that clearly violates business being governed by the laws of supply and demand, in which there is private interference and subsidy, and in which buyers are prevented by suppliers from transacting freely with retailers. The practice of paying retailers not to carry competing products is, arguably, on par with mafia practices. Certainly such practices don't meet Adam Smith's criteria of promoting the public interest.

So here we see a contradiction within the popular definition of a "free-market" as one in which there is no government regulation and yet the market operates according to market principles: this is a case where regulation is required to prevent market principles from being undermined, a direct contradiction in the term "free-market".

References:

COCA COLA COMPANY v. HARMAR BOTTLING COMPANY

LOUISA COCA-COLA BOTTLING v. PEPSI-COLA METROPOL

Monopolistic practices

There are of course many examples of outright monopolies throughout American history, but a well-known company that currently engages in monopolistic practices and gets away with it is Ticketmaster. Ticketmaster's basic approach has been to pay venues for exclusive contracts to sell tickets for performances at those venues, and then tack high service fees onto the price of tickets. Given that there is no competition, and that under the terms of the agreements even the venues themselves can't sell tickets directly in many cases, there are no alternatives for consumers. In some cases the venues do retain the right to sell tickets directly at the door, but since Ticketmaster is their only competition, their fees typically match or are only slightly lower than Ticketmaster's.

Unlike Coca-Cola and Pepsi, however, which paid for exclusivity at given locations but were never able to completely dominate the national market, Ticketmaster's practices have led to a virtual monopoly on the national market. This is due in part to the nature of the industry. Whereas any given city has hundreds or thousands of outlets for the sale of soft-drinks, such as grocery stores, gas stations, vending machines, restaurants, and movie theaters, there are typically only a few major venues for live performances in any given state. A whole state typically has anywhere from one to ten venues where major live performances can be held and 10 to 50 moderately sized locations, so it's much easier for Ticketmaster to monopolize the market by obtaining exclusive contracts with a few hundred locations nationwide.

Exclusivity contracts can't possibly promote the interests of either consumers or performers. The entity that pays the most for an exclusivity contract, all else being equal, is the one that gets the contract with the venue, which means that it then has to charge the most in additional fees to recoup the cost of the contract. Even if the additional fees lead to diminished ticket sales, the venue and the ticket seller can still come out ahead, at the expense of the consumers and the performers (or the performers' managers, as the case may be), whose revenue comes from the base price of the tickets.
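
A hypothetical set of numbers shows how the venue and the exclusive ticket seller can both come out ahead even as consumers and performers lose (all figures are invented for illustration):

```python
# Hypothetical comparison: competitive ticket selling vs. an exclusivity contract.
base_price = 40  # dollars per ticket; base-price revenue goes to the performer

# Competitive case: low service fee, the show sells out, no contract payment.
fee, sold, contract = 2, 5000, 0
seller_net_open = sold * fee - contract       # 10,000
performer_rev_open = sold * base_price        # 200,000

# Exclusivity case: high fee to recoup the contract; higher prices deter some buyers.
fee, sold, contract = 12, 4500, 30000
seller_net_excl = sold * fee - contract       # 24,000 (seller still ahead)
venue_gain = contract                         # 30,000 (venue ahead)
performer_rev_excl = sold * base_price        # 180,000 (performer loses 20,000)

print(seller_net_open, performer_rev_open)
print(seller_net_excl, venue_gain, performer_rev_excl)
```

In this sketch the seller and venue both gain from the exclusive arrangement, while the performer and the priced-out customers bear the cost.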

A live performance is a form of natural monopoly. There is a fixed supply of available "goods", i.e. "seats", at a single location, and the price of admission will have zero impact on the supply of those goods. Adam Smith's argument for a "free-market" in which prices are allowed to rise was that rising prices would lead to an equilibrium of supply and demand over time: fewer goods would be consumed during times of bounty if prices were elevated, and the excess goods would be saved for times of scarcity, leading to lower prices and more supply when harvests were low.

Nothing of the sort can happen with live performances. The supply of available "seats" at a given venue is always fixed; it can't go up or down. Inflating the price of tickets serves no market function; the only thing it does is increase profits, with no resulting "public good" as Smith envisioned. And any possible "public good" that could even be argued for, such as allowing the extension of a tour by getting more venues to accept bookings, would be entirely redistributive, unlike Smith's example. In Smith's example the same population of people "benefit" from the evening out of prices and supply over time: generally the same people who paid more for corn during a time of bounty would pay less than they otherwise would during times of scarcity. But even if charging more for tickets in Los Angeles led to additional venues signing onto a tour in Chicago, the effects would be totally redistributive, with the excess fees paid by consumers in Los Angeles benefiting consumers in Chicago, not themselves.

Likewise, there is no ill effect from lower prices, as there is in Adam Smith's example. In Smith's example the lower prices during times of bounty led to waste of resources that could have been saved for times of scarcity. In the case of live performances there is nothing to "waste". There is a maximum number of people who can purchase the good no matter what; all that lower prices do is shuffle who that group of people might be. If a venue of 5,000 seats will sell out at $10 a ticket and also at $50 a ticket, then in either case 5,000 people are going to see the performance. At $50 a ticket it may be a somewhat different group of 5,000 people, but nothing of substance changes.
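
The fixed-supply point can be stated as bare arithmetic (using the hypothetical figures from the paragraph above):

```python
# A sold-out venue yields the same attendance at any price; price changes only
# redistribute money and possibly change which 5,000 people attend.
seats = 5000

for ticket_price in (10, 50):
    attendees = seats                  # supply is fixed at either price
    revenue = attendees * ticket_price
    print(f"${ticket_price} tickets: {attendees} attendees, ${revenue:,} revenue")

# Unlike corn, no amount of price movement can create more seats now or
# preserve unused seats for a future date.
```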

In the case of Ticketmaster, the additional cost of tickets above what would be possible in the presence of ticket-selling competition serves no market function in terms of supply and demand, and it has no benefit for the performers and their managers. The primary benefit goes to Ticketmaster, with a secondary benefit going to the venue, and the cost of those benefits is borne by the consumers and performers, who receive nothing in return. Ticketmaster is clearly a rent-seeking agent of extortion.

It's impossible to argue that a company which pays to eliminate a market is participating in a "free-market". Several lawsuits have been filed against Ticketmaster on anti-trust grounds, but they have all been dismissed or settled out of court. The most prominent action, the complaint brought by Pearl Jam in 1995, was dropped by the Justice Department without explanation before ever going to trial.

It is difficult to argue that if the government were to regulate the ticket sales industry more strongly, outlawing the practice of ticket sellers obtaining exclusive contracts with venues, this would make the market for live performance tickets "less free"; yet by the definitions above it would, which again points to a contradiction in the definition of "free-market".

References:

U.S. Ends Ticketmaster Investigation

Wikipedia: Ticketmaster

The National Football League

The National Football League is technically a non-profit association which oversees franchises held by individual owners; under the rules of the association, corporations are not allowed to own franchises. Taken as a whole, the franchises of the National Football League are the most successful and profitable of any sports league in the world. The league is a private governing body which enforces a strict structure of economic regulation upon the franchises and players, a structure widely considered to be the foundation of the league's financial success. So does the economic model of the National Football League constitute a "free-market" simply because the regulations are enforced by a private association instead of "the government"? Let's take a closer look.

The NFL classifies revenue into two basic categories, shared and non-shared revenue. The primary sources of shared revenue are broadcasting contracts and merchandise licensing; non-shared revenue comes primarily from stadium ticket sales. All shared revenue is divided equally among the NFL franchises, and since shared revenue is also the largest form of revenue in the league, this creates a relatively even playing field that allows each team to be competitive. Not only is the majority of league revenue shared equally across all franchises, but salary caps and floors are set based on shared revenue, so that even if an individual franchise is able to generate higher non-shared revenue, the salary caps and floors for that franchise are still the same as for every other franchise in the league. Not only that, but the stadiums themselves, the major source of non-shared revenue, are also highly regulated by the league, with maximum seating caps. A simplified sketch of the arrangement follows.
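
Here is that sketch (the dollar figures and the cap fraction are made-up placeholders, not the league's actual numbers):

```python
# Simplified model of NFL revenue sharing and the league-wide salary cap.
shared_revenue = 6_000_000_000  # hypothetical league-wide TV + licensing revenue
teams = 32
cap_fraction = 0.48             # assumed share of revenue allotted to player pay

per_team_shared = shared_revenue / teams
salary_cap = per_team_shared * cap_fraction  # same cap for every franchise

# Every franchise receives the same shared-revenue check and faces the same
# cap, regardless of how much non-shared (stadium) revenue it earns.
for team, stadium_revenue in [("Team A", 40_000_000), ("Team B", 120_000_000)]:
    total = per_team_shared + stadium_revenue
    print(f"{team}: total revenue ${total:,.0f}, salary cap ${salary_cap:,.0f}")
```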

In addition, all NFL players are members of the National Football League Players Association, a union through which all player salaries and benefits are negotiated. So there is virtually no aspect of the economy of the National Football League that operates according to market principles; the entire league is essentially a planned economy (in which corporations are forbidden), and yet it is largely due to this planned economy that the National Football League has become the dominant sports league in the world.

So we get back to the central question: how does the definition of a "free-market" as a market that is free from government regulation, where prices, exchanges, and compensation are determined by market forces, apply to the National Football League? Clearly the NFL is an example of a private organization in which prices, exchanges, and compensation are not determined by market forces at all, nor are they regulated by "the government".

References:

http://stateoftheshield.com/

http://www.shmoop.com/nfl-history/economy.html

http://mpra.ub.uni-muenchen.de/17920/1/MPRA_paper_17920.pdf

Discrimination

Another "free-market" paradox is the issue of discrimination. Because markets can essentially act as a "tyranny of the majority", market forces can lead to instances of exclusion of certain groups of individuals based on things like race, gender, religion, or any type of characteristic. In such a situation market forces would actually be the primary barriers to allowing certain individuals to "transact freely" and would prevent the determination of prices by supply and demand.

Contrary to the claims recently made by the libertarian Senator Rand Paul, racial segregation and discrimination in the treatment of customers in the South would not have been solved by "the free-market"; indeed it was a product of market forces. Paul's basic claim was that because business owners are driven by profit motive, they would have had an incentive to cater to African American customers in order to gain additional customers and thus increase their revenue, so even without government intervention "the market" would have eliminated segregation and discrimination against blacks in the South. Paul stated in interviews that businesses in the South which discriminated would have lost revenue, both by missing out on black customers and by losing the business of "good" whites who would have boycotted them.

The problem with Paul's claim, however, is that it completely misunderstands what was happening in the segregated South. Businesses weren't spurning black customers because the business owners were racists (though they may or may not have been); they were spurning black customers because their white customers were racists. Paul's logic only works if the only reason business owners discriminated against blacks was their own racism, but that's not the case. Business owners in the South discriminated against black customers due to market forces, in response to the demands of their more economically important white customers; thus "the market" was driving the discrimination. A case in point was Woolworth's famous policy of not serving blacks at whites-only counters, a policy that the national chain maintained only in the South in order to "abide by local custom". After all, when President Eisenhower forcibly integrated Central High School in Little Rock, Arkansas in 1957, federal troops had to be brought in to protect the black students from angry white mobs and to escort them inside the school.


[Image: Blacks and two whites being taunted by a white mob during the Greensboro sit-in at a Woolworth's]

[Image: Elizabeth Eckford being heckled on the first day of the integration of Central High School]

So here we see the contradiction. The definition of a "free-market" includes the idea that all buyers and sellers are allowed to "transact freely", but in this case we have an example where, absent regulation, the dominant market force (white customers) prevented blacks from being able to "transact freely". Free from government restrictions, private citizens imposed their own set of market restrictions. In this case government restrictions (preventing white business owners from discriminating against blacks) actually brought about the condition of allowing all buyers and sellers to transact more freely, which, again, points to a paradox in the definition of "free-market".

What was happening in the South was that, for example, restaurants run by white owners either refused to serve blacks at all, or served them in separate dining areas, often with separate entrances, separate restrooms, and so on. Obviously this put an additional burden on the restaurant owners, because the separate facilities cost extra money and/or they lost black business as a result of these practices. The reason they did it was that if they didn't, they would lose almost all of their white customers. White patrons didn't want to sit next to blacks and eat dinner in a restaurant; they didn't want to use the same bathroom as blacks; they didn't want to drink from a glass that had been used by blacks. Any restaurant that relied on white business and served blacks the same way it served whites would have lost most, if not all, of its white business, a much bigger economic loss than any gains from black business. The same went for almost any line of business: barber shops, clothing stores, grocery stores, movie theaters, etc. Thus "the market" is what was actually driving the segregation and discrimination by white-run businesses in the South.

Indeed one of the ironies of the situation was that some white business owners actually wanted to serve blacks; for the very reasons cited by Paul, they wanted black customers and didn't want the burden of having to provide separate facilities. But due to "market forces" they were unable to run their businesses the way they wanted to, because they were forced by financial necessity to cater to the tyranny of the white majority. Business owners were in fact being prevented by market forces from running their businesses as they saw fit.

Government regulation, in this case, "freed" business owners to run their businesses how they wanted to and allowed them to expand their customer base without fear of white customer retaliation; so not only did the government regulation banning racial segregation and discrimination by businesses allow blacks to transact more freely, it also allowed business owners greater freedom in how they ran their own businesses. Paul's argument that market forces would have "eventually" ended racial segregation in the South relies on an entirely fictional South in which the majority of people were already opposed to racial segregation, but that wasn't the reality. The reality was that the majority of people were in favor of racial segregation, and continuing to cater to the interests of racial segregation in the South wasn't only economically advantageous, it was an economic necessity. Businesses in predominantly white areas that tried to serve blacks faced the prospect of going out of business due to market forces, because the overwhelming majority of white customers were racists and would not patronize businesses that didn't discriminate. Granted, the social protests that took place in the South during the late 1950s and early 1960s did make some progress in desegregating individual locations, almost entirely in the "northern South", but when the Civil Rights Act of 1964 was passed, segregation and discrimination were still widespread and firmly entrenched in the so-called "Deep South".

References:

THE GREENSBORO CHRONOLOGY

Erection of toll barriers

A final example of "free-market" contradiction is that of toll barriers. In an unregulated market there is nothing to prevent private individuals from erecting barriers to trade and then charging tolls for the transport of goods. This would, of course, be a purely rent-seeking activity, in direct contradiction to the concept of "buyers and sellers" being allowed to "transact freely". Indeed this type of activity happens all the time when it is allowed. In medieval Japan the practice was so widespread that one of the major reforms of Oda Nobunaga in the 16th century was the abolition of private toll barriers, which paved the way for the unification of Japan. This abolition of toll barriers is simultaneously considered a "free-market" reform and a regulation.

The toll barrier example is another good illustration of "free-market" paradoxes. If trade is flowing freely through a mountain pass, and a private individual acquires the land, erects a gate blocking the pass, and charges everyone a fee to pass through, is this to be considered a "free-market" activity? Clearly this private individual is preventing others from "transacting freely", and the erection of the gate serves no economic purpose other than the enrichment of the individual who charges the toll; it provides no benefit to anyone else, and everyone else was better off before the gate existed. Government regulation could prevent this, either by ensuring that the mountain pass remains public property or by regulating the types of modifications that can be made to it by private individuals. In either case such government regulation would protect the ability of individuals to transact freely, thus advancing one of the major elements of a "free-market" through government regulation, which, according to the definition, would make it "anti-free-market".

These are just a few examples of the inherent contradictions in the concept of a "free-market". This is why there is no such thing as a true "free-market", and never can be: the principles of a free-market are at odds with one another. Allowing "buyers and sellers to transact freely" in many cases requires government regulation. The whole concept of a "free-market" is paradoxical, illogical, and meaningless, and in fact harmful, in that it often confuses the conversation and is used to intentionally mislead.

That doesn't mean that the point Adam Smith was trying to make wasn't valid; it just means that the way the term "free-market" has subsequently been defined, and is widely used as an axiom, is not valid. If we go back to what Adam Smith was actually saying and look at the context, what we see is that Smith was arguing that prices should be determined by knowledgeable individuals with a vested interest, as opposed to static rules set by lawmakers who don't have in-depth knowledge of the market. But these same guiding principles wouldn't necessarily preclude price-setting by government agencies. The objective laid out by Smith in the example was to stabilize prices over time by allowing market prices in the present to better reflect predicted future conditions, and Smith says that the best way to do this is to let those who know the most about future conditions influence prices to the full extent of their knowledge. At the time Smith was writing, governments didn't engage in things like monitoring crop conditions, making crop forecasts, or aggregating and analyzing data. In that situation merchants and farmers on the ground clearly had a better assessment of future conditions than individuals in government, and could certainly do better than static rules which didn't even take changing conditions into account. But that's not necessarily true today.

For example, today in the United States the U.S. Department of Agriculture engages in crop forecasting that is among the most accurate economic forecasting in the world. This is because the USDA aggregates data from farmers all over the country and has inspectors who go on-site all over the country to inspect conditions and get input from farmers, and it uses all of this data to forecast future yields. Given that these forecasts are arguably the most accurate picture we have of future crop production, by Adam Smith's logic the USDA should be setting prices for farm commodities. Adam Smith's argument wasn't really an argument for "the wisdom of markets"; it was an argument for the wisdom of technocracy, made at a time when governments were not technocratic. The point here is not to advocate government price setting; the point is that Adam Smith's argument was very specific and applied to a certain set of conditions, with a defined purpose. It wasn't a broad principle, as the "free-market" concept is often applied today.

As a broad principle, the term "free-market" is a contradictory theoretical construct with all kinds of inherent inconsistencies and undesired implications. The purpose of Adam Smith's invocation of "free-markets" was clearly to better align the prices of commodities with current and future supply and demand. The objective, then, is simply to align prices with supply and demand fundamentals, not to have "free-markets" for "free-markets' sake".

While the term "free-market" is ill-defined and inherently problematic, what we can talk about are regulated markets, unregulated markets, degrees of regulation, and pro-market vs. anti-market regulations.

Capitalists do not like unregulated markets

One of the biggest misconceptions about capitalism is the notion that "capitalists" themselves, i.e. actual capital owners, like "free-markets". This misconception stems from the historical development of capitalism and the opposition of merchants and manufacturers to the economic controls of feudal aristocracies during the 18th and 19th centuries. The economies of Europe had been tightly controlled under the feudal system for centuries, and were still dominated by mercantilist policies at the onset of the industrial revolution. The works of Adam Smith and other classical economists were in clear opposition to the feudal system and mercantilist policy, as was the new capitalist class that rose to power during the industrial revolution and sought to overthrow the control of the ruling aristocracies. Businessmen during this time certainly opposed the rules and regulations put in place by the feudal aristocracies, as those rules and regulations were designed to protect the interests of the aristocracies. In other words, the capitalists of the 18th and 19th centuries sought markets that were "free from" the regulations and barriers that protected the interests of the establishment, because at that time they were not a part of the establishment; in fact they were a threat to it.

But here is the issue: once so-called "democracy" and "free-market capitalism" had overthrown the feudal aristocracies and put an end to mercantilist policy, the capitalists themselves became a part of the establishment. They were now the ruling class, and as such they sought the implementation of their own set of rules and regulations to protect their interests. This issue is actually very similar to the issue of "States' Rights". "States' Rights" is held out by many so-called conservatives as a cherished principle in and of itself, but principles such as "States' Rights" and "free-markets" are seldom really appealed to on principled grounds; they are appealed to only when it suits the position of those making the appeal under a specific set of conditions.

For example, the proponents of slavery appealed to "States' Rights" when it became clear that the national mood was trending against slavery and the advocates of slavery began losing power in the federal government. Yet, had the pro-slavery position been the dominant position in the federal government there is no doubt that the advocates of slavery would have appealed to federal power to entrench slavery. Today we have conservatives who appeal to the "States' Rights" argument on any number of issues, from abortion, to immigration policy, to education, to Social Security, when existing federal laws maintain a position that they are opposed to, yet when those same people get into power in the federal government they immediately advocate for federal laws to enforce their views on the whole country.

The issue is the same with capitalists and "free-markets". Capital owners advocate for "free-markets" when existing rules or restrictions run against their interests, but they also do everything in their power to enact rules and restrictions that benefit their interests. They don't actually adhere to "free-markets" as a principle; they appeal to "free-markets" when it suits them, and it really couldn't be any other way. It would be against the self-interest of a capital owner to accept a "free-market" if competition and market forces would result in losses for them. Thus, when faced with unfavorable market conditions, all capital owners are compelled to advocate for regulations to protect their interests, and they do so.

The most fundamental example of this from the 18th and 19th centuries was in the area of patent and copyright law. Patents and copyrights are of course regulations, which put limits on markets and create monopoly rights, thereby inherently restricting supply. The strengthening of patent and copyright law was instrumental to the rise of capitalism and to the fortunes of a great many individuals. This again points to the inherent contradictions within the "free-market" concept: in a truly pure "free-market" there would be no patents or copyrights, yet these are clearly forms of regulation that capitalists, and indeed non-capital owners, want, at least in certain forms. Patent and copyright law, at least initially, was extremely beneficial to non-capital-owning individuals who had good ideas. It helped to ensure, in many cases, that "the little guy" was able to get the just rewards for his contributions; indeed patents and copyrights were among the most common ways that poor and working-class individuals were able to become wealthy during the 18th, 19th, and even early 20th centuries. Patents helped to ensure (though they never guaranteed) that a poor worker with a good idea could benefit from his or her innovations without the boss or some other established businessperson profiting from them without compensating the inventor. Today, however, the overwhelming majority of patents are filed by corporations, not individuals, and patent and copyright law is more likely to be used by established capital owners against "little guy" upstarts than the other way around.

Furthermore, the fact that capital owners support many regulations isn't necessarily a bad thing; in fact in many cases it's a very good thing. The point, however, is that the notion that capitalists always prefer unregulated markets is completely false. Let's look at some scenarios where capitalists seek out market regulation.

When bad actors can undermine whole industries

This is a scenario where the actions of one or a small number of individuals or corporations can undermine an entire industry, causing losses or reducing gains for the "good actors". It is especially common in the food industry, where foodborne illnesses can lead to recalls or customer shunning of whole categories of food, even though only one or two producers may have been the cause. A recent example is the listeria outbreak in cantaloupes, which resulted in over 20 deaths and a hundred reported illnesses, all of which originated from a single farm in Colorado. Even though all of the tainted cantaloupe came from that one farm, all cantaloupe from all farms was pulled from shelves and consumers shunned cantaloupe entirely, and it's likely that this will affect consumer behavior for at least a year or two; so while one farm was the source of the problem, the economic impact was felt by all cantaloupe farmers.

In these situations "responsible" businesses with a lot of capital at risk don't want their businesses to be undermined by irresponsible or fly-by-night operators, so they develop best practices and/or standards and then appeal to the government to enforce those practices or standards upon the whole industry. The classic example is H.J. Heinz's support of the Pure Food and Drug Act of 1906. Indeed Heinz's support for this act was so instrumental that the Heinz food company touts its role in the passage of the act, and its continuing role in government food regulation, on its website under the section Pure Food. Pure Good.

The passage of the Pure Food and Drug Act of 1906 resulted in a massive number of food processors going out of business due to their inability to comply with the regulations. This clearly had a beneficial impact for Heinz, as it eliminated competition and increased the company's market share; however, it's difficult to argue that forcing companies out of business that were unable to comply with food safety regulations was a bad thing. The reason the legislation was even brought up was that it had the mutual support of consumers, public health experts, and industry leaders like Heinz. At that time there were no regulations at all dealing with food labeling or industrial food safety, and there were thousands of different food processing companies engaged in things like canning and jarring. There were no standards; companies could put whatever they wanted on their labels, they often made completely unsupported medical claims, describing things like cocaine as a health food, and not all of them used sanitary practices. Because of such labeling practices, even knowing which company's food you were buying wasn't always possible, and even when it was, things were in such flux that there were few established brands that you could rely on finding in the store on a regular basis. Likewise, it was impossible for responsible food processors to compete on price with irresponsible ones, which enabled irresponsible food processors to hang on to market share, since their products were safe enough that people didn't get sick from them every time.

As a result, unsanitary food processors were ruining the reputation of processed food for everyone. When someone tried canned food and then got sick from it, it would turn them off of all canned food. So Heinz and other industry leaders who did practice quality control appealed to the government to enforce their type of quality control practices on everyone, so that consumers would feel safe buying processed foods and the industry as a whole would benefit. If, in the process, some food processors couldn't afford to adopt the practices necessary to meet the standards, well, that was just too bad.

The act of regulation was a benefit to specific capitalists and a detriment to others. Those capitalists who benefited had succeeded in getting the cost of quality control for their industry paid for by taxpayers and in erecting barriers to entry for would-be upstart competitors. The enforcement of the regulations, from which Heinz benefited, cost Heinz nothing beyond its lobbying efforts on the bill's behalf; the cost of inspecting individual food processors and enforcing the rules fell onto the government, and was paid for by the entire tax-paying population. This is another reason why capitalists like government regulation: it is almost always a means of externalizing costs for the businesses that benefit.

When short-term incentives run counter to long-term interests

The obvious example here is the recent housing bubble. We've already gone through the details of the housing bubble, but the thing to focus on here is the fact that many individual mortgage brokers and loan originators did not want to give loans to people who were obviously unqualified; indeed they knew that originating loans for unqualified people who were likely to default was bad for the industry and in fact bad for their own business. The problem, however, was that due to market competition, if a mortgage broker or loan originator didn't issue the loans that other mortgage brokers were willing to issue, they would lose business, and in the short term market competition would drive the "responsible" brokers out of business. You can hear more about this here: The Giant Pool of Money.

So the lack of regulation meant that mortgage brokers were forced to engage in business that they knew would eventually sabotage their own businesses down the road, because if they didn't do it at the time they would go out of business immediately. Had regulations been in place to prevent the type of irresponsible lending that was taking place, many of the mortgage brokers that have gone bankrupt over the past few years would still be in business today.

This type of scenario is prevalent throughout the economy, and in some cases the capital owners are smart enough and disciplined enough to recognize the problem and appeal for regulation to prevent markets from being driven by short-term incentives that run counter to their long-term interests. Again, such regulations run against the interests of those pursuing only short-term gains, so they are against the interests of some capital owners but in the interests of others, and these types of regulations get passed when the capitalists with long-term interests are more powerful than those seeking short-term gain.

When businesses interact with and depend on other businesses

Many regulations in place are the result of capital owners in one industry seeking regulation of other industries that they depend on: for example, grocery store owners' support of food safety and labeling regulation for food processors and farmers. Virtually all businesses support certain regulations of banks in various ways, property owners support regulation of the construction industry, auto manufacturers support regulation of the petroleum industry, and so on.

A recent example of this has played out in the battle over debit card usage fees. The regulation of debit card swipe fees, as with most regulation, is typically presented as a case of "government vs. business", but really the government is just a tool which does the bidding of whichever groups in society are most powerful. Sure, there may be some cases where individual legislators or elected officials take a stand against some powerful interest group in favor of doing "what's right", but generally speaking any government is merely an instrument wielded by interest groups. In the case of debit card fees, the primary interest groups are the banks and the retailers, capitalists vs. capitalists, with both sides appealing to the government to write the rules of the game in their favor.

As it turns out, the retailers won this legislative battle with the addition of the Durbin Amendment to the Dodd–Frank Wall Street Reform and Consumer Protection Act, which requires that the debit card processing fees large banks charge retailers be "reasonable and proportional to the actual cost". This legislation was lobbied for hard by retailers like Wal-Mart and Target, among others, including the Small Business Association. The retailers, which have a long history of legal actions against MasterCard and Visa, argued that the duopoly power of MasterCard and Visa, which combined process virtually 100% of debit card transactions in the United States, enabled them to charge inflated processing fees due to lack of competition. With the passage of the legislation restricting what the processors could charge retailers, the processors, Visa and MasterCard, increased the fees they charged to banks, which some banks have in turn passed on to consumers in the form of new fees for checking accounts or debit card use.

See:
Banks vs. Wal-Mart: Round Two
The Big Retailers Versus the Big Banks: It Makes a Big Difference

When capital owners seek additional control over capital controllers

In many cases capital owners seek additional control and legal power over those whom they entrust with the administration of their capital: essentially, investors seeking additional control over corporate boards and executives. Investors have the ability to sue corporations and their executives and officers. Such suits are typically brought by large institutional investors, such as pension or mutual funds; in rarer cases they are brought as class action suits by individual investors.

However, investors obviously also seek ways to prevent the need for lawsuits in the first place, via regulation. The most common of these are regulations requiring corporate transparency. Publicly traded companies, i.e. companies which openly sell shares of their stock via official exchanges, have many reporting requirements: they are required to submit an annual report to shareholders and to file a Form 10-K with the Securities and Exchange Commission. The most recent major regulation put in place on behalf of investors is the aforementioned Sarbanes–Oxley Act of 2002, which increased the transparency of corporate accounting.

The regulatory oversight of the SEC is essentially a subsidy to investors, who benefit from the (albeit inadequate) taxpayer-funded oversight provided by the government agency.

Licensing and certification

Milton Friedman famously campaigned against licensing and certification, especially in the medical profession, and he mostly considered licensing and certification in relation to workers, not capital owners. Friedman, somewhat correctly, regarded the American Medical Association as a union, and viewed government-required licenses and certifications as a means by which laborers could restrict markets, reducing the supply of labor in their profession and thereby enabling them to charge higher prices for their labor; as such, he argued, licensing and certification were against the public interest.

One of the things that Friedman fails to address in his arguments against licensing is that one of the strongest reasons for licensing is the ability to revoke a license. As Friedman notes, having a license is no guarantee of quality, but when an entity or individual fails to meet the quality standards of a licensing program, the ability to revoke their license to prevent future harm is very powerful, and even the threat of revocation can serve to keep practitioners on "the straight and narrow". It's obviously not 100% effective, but there is no doubt that it has an impact.

Nevertheless, much of what Friedman says about the impact of licensing on prices is true, but that doesn't necessarily mean that licensing isn't still in the public interest. I'm not going to address Friedman's arguments against licensing here, since that is not the point. The point is that while Friedman's opposition to licensing and certification focused primarily on skilled professionals, i.e. laborers, licensing and certification apply to corporations and capitalists as well, in multiple ways.

For one thing, licensing doesn't just apply to individuals; there are forms of licensing and certification that apply to corporations themselves, an obvious example being liquor licenses, which are held by businesses. For another, many of the individuals most heavily involved in the ownership and allocation of capital are licensed professionals: stock brokers, bankers, and financial advisors, for example, all have to be licensed.

So here is the issue: Milton Friedman makes all kinds of arguments as to why professionals support licensing and use it as a means to manipulate the market in their favor, and all of his arguments in that regard are true, but they apply just as much to businesses and to those who are entrusted with the holding and allocation of capital. The point here is that whatever the merits or demerits of licensing may be, it is something that capitalists themselves want, in part for the very reasons that Friedman opposed it. Licensing programs exist because capitalists want them to exist.

Tariffs

The very first source of revenue for the federal government of the United States was tariffs; indeed tariffs remained the main source of federal revenue until the establishment of the income tax in 1913. The predecessor of the U.S. Coast Guard was established primarily to police the waters to enforce customs duties and tariffs. The interesting thing about tariffs is that they are another form of economic regulation that pits capitalists against one another: capitalists decry or support tariffs depending on the impact, actual or potential, upon their own business. Tariffs, of course, are essentially fees, or a tax, levied on goods when they are imported into a country. Notably, the United States pursued a strongly protectionist trade policy up until the 1940s, and a moderately protectionist trade policy up until the 1980s.

The pattern of protectionism during the phase of economic development, followed by trade liberalization as the economy matures, is a familiar one that many countries have followed. Tariffs and protectionism encourage domestic production. High tariffs on imported finished goods encouraged the development of capital within the United States by encouraging manufacturers to relocate here and set up manufacturing, so as to be able to sell to the American market without paying the tariffs. This policy in fact worked, and is credited with accelerating the pace of American industrialization, much to Britain's chagrin. The British in fact had laws against the export of industrial technology to developing nations and against the emigration of knowledgeable individuals to America. Nevertheless, such individuals and technology made their way to America despite the British restrictions, because the opportunities were too great. And while the United States levied tariffs on goods imported to America, the British maintained a more open trade policy, as international British companies sought to bring goods back into Britain from around the world.

Tariffs were almost universally supported by American capitalists until the latter part of the 20th century, at which point America's maturing corporations sought to go international and to be able to import foreign-made goods back into the United States.

Subsidies

Last but not least, I'll mention subsidies. American capitalists have long been highly subsidized; indeed Alexander Hamilton himself laid the groundwork for the public subsidy of American private business. Hamilton had a profound impact on the economic policies of the early United States, with influences that last to this day. His fundamental strategy for strengthening the American economy was the use of tariffs on imported goods, with a portion of the tariff revenue used to subsidize domestic manufacturing. This was laid out in Hamilton's Report on Manufactures, and these policies were officially adopted by the federal government. The issue of subsidizing industry was one of the central points of division between Thomas Jefferson and Hamilton, who famously feuded with each other on many topics.

Hamilton argued that the subsidies were essential to lure capital away from developed nations like Britain and France, while Jefferson argued that the subsidies would lead to corruption and bring about undue influence of private industry on the government. Jefferson also argued that the subsidizing of manufacturing would lead to concentration of capital ownership and a disenfranchisement of the agricultural South. Jefferson believed that an economy based on the widespread distribution of land ownership was essential to democracy, and that concentration of economic power within corporations would lead to political corruption and the establishment of an American aristocracy that would undermine democracy. Hamilton, on the other hand, argued that a diverse economy with a mix of agriculture, manufacturing, and finance would be more robust and lead to a more economically independent nation. Hamilton saw the "free trade" policies of the British empire as merely a tool of imperialism, and thus argued that in order for America to truly break free from British imperialism the United States would have to adopt economic protectionism and subsidize its manufacturers, to gain independence from imperial European powers that had a more advanced manufacturing base and more robust financial systems.

In reality both were right, of course: Hamilton's policies worked, and the corruption that Jefferson foresaw has plagued the country since the first days of its founding.

Hamilton also oversaw the establishment of a highly subsidized industrial center in New Jersey through the Society for Establishing Useful Manufactures. The Society had tried and failed to establish a manufacturing center on its own, and so appealed to Hamilton to subsidize its efforts through various tax breaks, grants, and rights-of-way. Hamilton agreed, and the industrial center was a success in the sense that a significant manufacturing base did grow there and the center became highly profitable. The area remained a major manufacturing site into the 20th century, when its waterfall-driven power became obsolete.

The subsidizing of business has remained prevalent throughout American history. The railroad corporations of the later 19th century are a classic case of the subsidization of capitalists by government. Arguably the government subsidies for American railroad barons did increase the pace of industrialization and accelerate the expansion of the nation, the growth of the American population, and the economy as a whole; yet at the same time the interactions between the government and the railroad barons were rife with corruption, untold billions of dollars were wasted and misallocated, and a significant portion of the wealth of the railroad barons was a product of public subsidy.

Today well over $100 billion a year in direct subsidies is provided to American capitalists through local, state, and federal government. This doesn't count things like the 401(k) tax code, which is itself essentially a subsidy for institutional investment companies, or the fact that capital gains are taxed at a lower rate than wage income. Nor does it count the emergency loans of over $1.6 trillion provided to financial institutions at below-market rates and on very favorable terms through the Federal Reserve and TARP programs.

The point here is this: capitalists will always welcome subsidies, and in any political system where political power is a function of wealth, as it is in most systems, those with the wealth, i.e. capitalists, have the power to appeal to government for subsidies, and they currently do so in the United States to great effect. Capitalists have basically two forms of leverage that they use to appeal for subsidies. The first is the most obvious: their wealth, which they can use to "lobby" (essentially bribe) the politicians who control taxpayer funds. In essence government officials are gatekeepers, so spending a few dollars "lobbying" them can provide access to many more dollars; since the money that government officials preside over isn't theirs, it costs them nothing to provide subsidies to capitalists. If capitalists spend a few million dollars a year "lobbying" government officials, this can result in hundreds of millions or billions of dollars in subsidies. The second form of leverage is perhaps even more important, and that is capital ownership itself. Concentration of capital ownership means that relatively few individuals have control over vast amounts of capital, and this control itself can be used, and is used, as a form of threat. Capitalists threaten to lay off workers, or to move their capital, or to do this or that with their private capital, if politicians don't give in to X, Y, or Z demand, and this is very powerful.

This is what drives many of the state and local subsidies for corporations in America: competition between localities to attract capital results in subsidies for capital. With increasing globalization the same has become increasingly true at the national level, in America and around the world, with national governments essentially competing to out-subsidize one another in order to attract capital. Since capital owners are relatively few and capital is so important for economic productivity, this puts capital owners in a powerful position, one which only grows more powerful as capital ownership is consolidated. The result is increasing pressure to subsidize capital owners with resources acquired from non-capital owners, i.e. the taxation of workers and small capital owners to provide the revenue needed to subsidize large and powerful capital owners.

The types of regulations cited above are only a small sampling of the regulations favored by capitalists. Overall what we can plainly see is that while people often associate "free-markets" with capitalism and claim that capitalists don't like regulation, this is in fact not true. Specific capitalists don't like specific regulations, but they clearly favor many others. In some cases the regulations that capitalists approve of are generally beneficial to consumers and society as a whole; in other cases they are not. Many economic regulations actually pit capitalists against capitalists: they are put into place at the request of some capitalists because they benefit them, while the same regulations may harm other capitalists. Governments enacting such regulations are merely acting at the request of capitalists themselves; government regulation is a tool used by capitalists to pursue their interests and increase profits in capitalist societies.

Capitalists do not like efficient markets

It's not just that capitalists don't like unregulated markets; they don't like efficient markets either. In fact, capitalism could not exist in a so-called "perfect market," a theoretical 100% efficient market. This is extremely important to understand, because it gets at the heart of the misconceptions about capitalism and the motives of capitalists.

When Adam Smith wrote The Wealth of Nations over 200 years ago the economic systems of Europe were highly inefficient. Smith observed that within an inefficient system businesses could be driven by profit motive to increase efficiencies, i.e. that by better serving the public interest a business would yield higher profits. This is true, but Smith was really only dealing with one half of the equation. The reality is that the rate of profit is determined by the relationship between business efficiency and market efficiency. It's true that a business' profits can be increased by increasing the efficiency of the business, or rather by the business more efficiently meeting market demands than competitors, but that is only one way that profits can rise; profits can also rise if the market itself becomes less efficient. Profits are just as much a measure of market inefficiency as they are of business efficiency. The best way to understand this is simply to look at it in terms of a mathematical formula, as stated below.
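
In symbols (a reconstruction from the description in the next paragraph, using B for business efficiency and M for market efficiency):

$$\text{rate of profit} = \left(\frac{B}{M} - 1\right) \times 100$$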

Technically, business efficiency over market efficiency gives us the profit factor, and by subtracting 1 and multiplying by 100 we get a true "rate of profit". We can consider "efficiency" to be a value that ranges from 1 to 100, with 100 being either a perfectly efficient business or a perfectly efficient market. What we see is that it doesn't require a perfect market in order for businesses to be unprofitable; if a business is less efficient than the market then the business will be unprofitable.

Let's work through some examples. If "the market" has an efficiency of 20 and a business has an efficiency of 25, then every dollar invested will yield one dollar and twenty-five cents; the rate of profit is 25%. If the business is equally as efficient as the market, then for every dollar invested the business will yield one dollar in return; in other words, the actual rate of profit is zero. If the business is less efficient than the market, then for every dollar invested less than a dollar is returned, so the actual rate of profit is negative. So basically, a business has to be more efficient than the market in order to be profitable, and this is true whether the market is highly inefficient or not. But as markets become more efficient it of course becomes increasingly difficult to be more efficient than the market, and under the condition of a theoretical "perfect market" it would be impossible for any business to be more efficient than the market. Thus under perfect market conditions profits would always be zero or less than zero, hence it has long been predicted that profits would decline in capitalist economies as they matured, under the assumption that markets would become increasingly efficient.
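
For readers who prefer code, here is a minimal sketch of those three cases in Python (the efficiency scores are the illustrative values used above; the 15 in the third case is an assumed value chosen just to show a negative rate):

```python
def rate_of_profit(business_efficiency, market_efficiency):
    # Rate of profit as defined above: (business / market - 1) * 100
    return (business_efficiency / market_efficiency - 1) * 100

# Illustrative scores from the examples above (efficiency runs from 1 to 100)
print(rate_of_profit(25, 20))  # more efficient than the market: 25.0 (%)
print(rate_of_profit(20, 20))  # equally efficient: 0.0 (%)
print(rate_of_profit(15, 20))  # less efficient (assumed score): -25.0 (%)
```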

But the point is that perfect market conditions don't have to be reached in order for profits to fall to zero or less than zero, business efficiency just has to be less than or equal to market efficiency, even if market efficiency is only 10 or 20, or whatever.

So let's look at the qualities of an efficient market. Market theory is predicated on the assumption that markets work efficiently when individuals are well informed, make rational decisions, and act in their self-interest; when there are low barriers to entry; when no individual has the power to set prices; when everyone has access to technology; and when there are no externalities. A perfect market, then, a market with 100% efficiency, has the following qualities:

  • Every individual is omniscient (knows everything about the market)
  • Every individual makes only rational decisions
  • There are no barriers to entry
  • All prices are determined by individuals engaged in specific transactions (no price fixing)
  • Every individual has equal access to technology
  • There are no externalities

So basically, while those theoretical conditions may never be fully met, as markets come closer to those conditions it becomes increasingly difficult for businesses to maintain their rate of profit. In mathematical terms, as markets come closer to those conditions their "market efficiency score" (the denominator in the profit equation) gets closer to 100.

Let's use a concrete example. Let's say that widgets can be produced and brought to market for $10 each, meaning that when every single cost is factored in, the cost of the machines, the materials, the labor, the facilities, the shipping, etc., the cost is a total of $10. Now let's say that these widgets can be sold for $15. In that case there is a profit of $5 per widget.

What market theory says is that competition will drive profits down in an efficient market. In an efficient market someone will begin producing those widgets for the same $10 and selling them for $14 each, netting $4 in profits, then someone will come in and sell them for $13, then $12, then $11, etc. Now, in order to maintain higher profits two things can be done: either the cost of producing a widget can be reduced or market efficiency can be reduced. If the cost of producing the widget is reduced then that can temporarily yield profits, but market forces would again result in the same downward pressure on prices, driving the profit margin back down toward zero. This is considered one of the positive aspects of a capitalist market system. Profit motive provides an incentive to find more efficient, i.e. less costly, ways to produce commodities and it provides an incentive to accept lower profits per unit in order to undercut competition, thus bringing the price of commodities closer to the cost of production.
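
A minimal sketch of that downward pressure, assuming each competitor simply undercuts the going price by $1 until the price reaches the cost of production:

```python
# Toy model of competitive undercutting using the widget numbers above.
cost = 10.0   # total cost to produce a widget and bring it to market
price = 15.0  # initial selling price

while price - 1 >= cost:
    price -= 1  # a new competitor undercuts the going price by $1
    print(f"price ${price:.2f}, profit per widget ${price - cost:.2f}")
# Ends at price $10.00 with profit per widget $0.00
```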

However, instead of developing better widgets or a more efficient way to produce widgets, to get higher profits a capitalist could externalize part of the cost by doing something like dumping all of the waste from the process into a river instead of paying to dispose of the waste responsibly. Likewise, instead of trying to lower the cost of production, a capitalist could engage in a marketing campaign that costs about 20 cents per widget and portrays the widgets as cooler than other widgets, thus enabling them to be sold for $13 even while others drive the price of comparable widgets down to $11. In an efficient market there are low barriers to entry, so another thing that a capitalist might do to maintain profits is appeal to the government to require licensing for widget makers, or to set new safety standards that his widgets are already designed to meet, etc. Several widget makers may get together and agree to fix prices, i.e. agree that none of them will sell their widgets below $13 in order to maintain a profit margin. Patents can be used to maintain a monopoly on technology, and on and on.

Since it becomes increasingly difficult for businesses to turn a profit as markets become more efficient, capitalists are driven by profit motive to undermine market efficiencies. It is also important to remember that in a capitalist economy "the market" has two underlying components, the commodity market and the labor market, so capitalists seek to reduce efficiency in both of these markets. They do this through secrecy or spreading misinformation, by encouraging emotional decision making in others, by erecting barriers to entry both privately and through government regulation, by attempting to collude and/or use government regulation to set prices, by using patents, by using contracts to reduce access to technology, and by using the legal system and public subsidies to engineer the externalization of costs.

The reason that profits have not actually declined in capitalist economies around the world is that capitalists have continuously undermined market efficiencies using these mechanisms and others in order to maintain profits. These actions have become so ingrained in our culture that many Americans now accept many of them as "normal". Let's look at some ways that capitalists do some of these things.

Fostering irrational decision making

There is advertising and then there is marketing. Advertising is arguably simply making the public aware of the products or services that a business or other entity offers, while marketing is arguably the attempt to actively encourage individuals to purchase or consume those goods and services. The reality is that marketing is all about encouraging irrational decision making. The whole objective of marketing is to get people to make irrational decisions, to get people to become emotionally attached to brands and commodities, or as Kern Lewis, director of marketing for CMG Financial Services, put it in Forbes magazine, "[to] inspire 'loyalty beyond reason.'" This isn't to say that marketing only involves emotional appeals, clearly objective facts are sometimes used in marketing, but the function of marketing is to encourage consumption and brand loyalty beyond reason.

This topic can easily be a book in itself, and many, many books have been written on the topic, so I'm not going to go into too much detail here, but a quick internet search brings up plenty of reading material: Google: marketing emotional appeal.

Emotional appeals aren't only a component of the marketing of commodities, however; they are also a component of human resource management, i.e. the labor market side of the equation. Employers do this through appeals to loyalty among employees and by engaging employees in various activities, from pep rallies to "team building" to "employee appreciation" parties to involvement in charity events. While some of these activities may seem benign or even good, the reality is that employers engage in them because they appeal to emotions to encourage irrational acceptance of lower compensation by workers. Wal-Mart famously refers to its employees as associates, which implies that the employees are partners, though they clearly aren't. This is another example of the ways in which employers try to make employees feel more enfranchised than they actually are.

At an even broader level, the entire American mass media fosters irrational consumerism. All major media outlets, especially those involved in television, are dominated by for-profit corporations, and in many cases the media companies are owned by larger corporations; for example, NBC and MSNBC are owned by General Electric. This isn't to say that those in charge of running these companies have explicit goals of fostering irrationality, but anti-consumerist programming, programming that takes a critical look at the corporate structure of the economy, or programming that fosters a rational and objective understanding of society, the economy, and economic choices is clearly not going to be produced and aired by the very corporations upon which the critical eye would be turned. Wacky teen dramas with kids doing crazy things and toting around a lot of accessories are great in the eye of the corporate media executive; teen programming that portrays teens being responsible, shunning materialism, and critically evaluating the problems in society, not so much... At this point all corporate-produced media is a massive marketing campaign for the status quo of American life, a status quo rooted in irrational consumerism and a capitalist economy. Virtually all corporate-produced media reinforces this worldview.

The fact is, though, that market theory is predicated on rational decision making, and the whole objective of market theory is supposedly to facilitate the efficient use of resources. The claim (whether true or not) that markets are the most efficient way to allocate resources is a primary defense of market systems, yet one of the most fundamental pillars of market theory, that individuals act in their rational self-interest, is a primary target for being undermined by capitalists. It is a primary target precisely because profits can be increased when consumers and wage-laborers don't make rational decisions, which is why capital owners encourage irrational decision making, thereby undermining market efficiency and increasing the rate of profit. But increasing the rate of profit in the short term isn't the only effect of this corporate-fostered irrationality, because irrational worldviews and irrational decision making affect all aspects of society. They undermine our democracy, and they undermine the economy as a whole over the long term, because irrational individuals are generally also less productive individuals, or at least less capable of developing the scientific advances, new technologies, and processes that can lead to increased productivity in the future. Thus capitalists themselves, in the pursuit of profit in the present, undermine society's ability to develop economically in the future. Capitalists have the competing interests of wanting consumers who are irrational, ignorant, and self-absorbed, and wanting workers who are rational, well educated, and capable in order to be productive (but still ignorant of labor rights and the value of their own labor).

Secrecy, lack of information, and misinformation

Secrecy and misinformation are integral to profit generation within capitalist systems. Protecting information is seen as a normal part of business operation for maintaining a competitive advantage in the marketplace, and indeed that is exactly what it can do. Secrecy reduces market efficiency, which helps to boost profits. Secrecy is used in many ways, from protecting trade secrets to hiding financial information from competitors to hiding illegal activity to hiding employee compensation from other employees to providing one-sided information on products to consumers.

Going back to marketing, when a for-profit entity attempts to sell a good or service, they don't lay out an objective body of information with the pros and cons of the product; they provide a highly biased one-sided set of claims. All marketing is essentially corporate propaganda. Thanks to the internet it is becoming easier for consumers to find more objective reviews of products, but even then in some cases corporations pay to have fake reviews posted on product review sites.

There are really too many ways that secrecy, lack of information and misinformation are used to protect profits within capitalist systems to go over them all here, but let's simply use the commercial food industry as an example. Despite the fact that there are now many regulations forcing food processors and packagers to disclose ingredients and nutritional information on products, consumers still lack a lot of vital information about food products when making purchasing decisions. For example, how different would individuals' food buying choices be if people were able to watch all of the food that they buy being made at the time of purchase? Obviously there would be many challenges to this in a modern economy, but the food industry benefits from this lack of information and to some degree the way that modern food supply systems function has been engineered to purposely remove consumers from the food production setting.

For example, meats are packaged in grocery stores in America in such a way as to essentially remove them as much as possible from the context of their animal origins. A century ago all meat was acquired at a butcher or on a farm, where people witnessed the treatment of the animals and the handling and processing of the meat. Today people simply buy fully butchered meat in shrink-wrapped packages where the realization that it even comes from a once living and breathing animal is no longer obvious. How was the animal treated? How was the animal slaughtered? Was the animal diseased? How was the meat processed? Did it get dropped on the floor? Did someone sneeze on it? Did it lie out with flies all over it? Did the butcher wash their hands? The consumer will never know, and what's more the seller is not only happy that they'll never know, they want to remove the product from the context of those concerns as much as possible altogether.

Would a consumer really opt for the cheapest package of chicken breasts as opposed to the one that is 50 cents more if they had "full knowledge of the market", i.e. if they knew every aspect of how the animals were treated, slaughtered, and processed by the different sellers? Would they even buy chicken at all if they knew all of that stuff or would they opt for a vegetarian alternative? We haven't even gotten into sausage making and hotdogs. And don't think that the lack of information about food processing is simply a matter of circumstance either. The fact is that animal farming, slaughtering, and processing is one of the most secretive industries in America, virtually on par with the military industry. Our entire food supply chain is owned and controlled by private for-profit businesses, and as private businesses they have a tremendous amount of control over what the public is allowed to know about their processes and products.

Not only is it already very difficult for the public to learn anything about how the meat we consume is raised and processed, even with very considerable effort, but there is an on-going effort to further restrict public access to information about animal farming and processing. Several states are working on legislation, at the behest of the livestock industry, to ban and criminalize the documenting of conditions on farms without the consent of the farm owner. Support for this type of legislation has grown among farmers after multiple incidents where individuals, either workers or otherwise, have recorded animal cruelty and sanitation violations on farms. In this case farmers are essentially trying to make it illegal to document illegal or publicly frowned upon activity at their operations. The legislation is designed to target whistleblowers working on the farms as well as activists and outside journalists.

The level of secrecy surrounding American livestock farming and processing is truly astounding once you are fully aware of it. A segment from a PBS investigative report in 2006 states that its journalists were the first ever allowed to film inside the largest pork processing plant in the country.

Now think about that. The largest processing plant for pork in the whole country, which at the time was slaughtering and processing 33,000 hogs a day, had essentially no public access, and even the access that was granted to PBS was very limited. We are talking about a facility that produces what we eat, arguably the most fundamental requirement of life. This is a product that goes into our bodies and we aren't even allowed to see how it's made!

A recent investigation into a cluster of rare illnesses at a Hormel hog processing facility revealed that at that facility the pigs' guts and brains fall onto the floor, where they slide into a drain and are caught in vats beneath the floor. The material is later used for food products. The pigs' brains are blown out using an air compressor, after which they slide down through the drain in the floor. In this case workers in the "brain blasting" area developed a rare autoimmune disease from inhaling vaporized pig brain, and as the cases mounted the company was, of course, very secretive about what was going on and about the operation of the plant in general.

So, owners of capital are often collectively driven by profit motive to reduce the amount of information that consumers have about commodities, in terms of the actual cost of production, how the commodities are made, and the true quality of the commodities. While some producers who produce high quality items may seek out ways to provide consumers with more information on the quality of their products, and may even want to provide consumers with information on how the products are made, there is a general collective secrecy among all capitalists within the market regarding actual costs of production and meaningful information on how products are made.

This brings us to the other major area of secrecy within capitalist systems, and that is compensation secrecy. Virtually all private businesses tell employees not to discuss compensation with other employees; in fact, non-disclosure of compensation is often written into employment contracts and can be grounds for termination. When a prospective employee applies for a job in the private sector, the employee isn't given a spreadsheet with the compensation for all of the employees at the company, nor are they told about any compensation negotiations with other prospective job applicants. Even more importantly, workers have no information telling them the value of what they produce or are expected to produce. Workers may get some idea of the value that they produce, but the estimates workers are able to formulate are highly subjective and not likely to be well informed. Employees basically have no idea what the profit margin on their labor is for their employers, and employers want to keep it that way, unless perhaps the employer is unprofitable.

A reporting rule included in the recent Dodd–Frank Wall Street Reform and Consumer Protection Act requires that public corporations report the ratio between the compensation for the CEO and the median worker. This is meant to be a point of information for investors, workers and economists, and corporations are vigorously fighting the implementation of this regulation.

One of the roles of unions is precisely to address this market inefficiency, this lack of information on behalf of workers. On the one hand unions violate the efficient market principle of individually determined prices, but this is done in order to rectify other market inefficiencies such as lack of information by workers and disparities of power.

Many capitalist societies have implemented regulations and mechanisms intended to increase market transparency, under the understanding that transparency and information are essential to any reasonable operation of markets and that capitalists themselves have incentives to undermine market transparency. Nevertheless, capitalists have been successful in all countries, to varying degrees, in protecting certain levels of secrecy, both on a broad scale and on a case-by-case basis, and there are ongoing efforts by capitalists to reduce market transparency whenever possible.

Externalities

Externalities are a huge component of profit generation. Externalities are one of the most important phenomena in economics and are a massive subject in and of themselves. Ultimately, externalities are one of the key factors which undermine classical and neo-classical market theory, which are predicated on decisions made out of individual self-interest.

There are both "negative" and "positive" externalities. Negative externalities are those things which incur a cost or harmful effect to a third party. Positive externalities are those things which bring about a benefit to a third party.

A positive externality is something like the way in which improvements to one's home can result in increased home values for surrounding neighbors. Another example of a positive externality is the impact that using solar or wind power has on both pollution levels and energy markets as a whole. When someone switches from fossil fuels to something like solar power, they reduce demand for fossil fuels, which ultimately makes fossil fuels cheaper than they would otherwise be; in other words, people who reduce their use of fossil fuels make fossil fuels cheaper for those who continue to use them. Thus, users of fossil fuels benefit from positive externalities generated by adopters of alternative energy and by people who reduce their energy consumption.

Positive externalities generally result in under-production of goods and services or under-adoption of practices because the individual bearing the cost pays for benefits that are received by others who don't pay.

Negative externalities, however, tend to have the opposite effect. Negative externalities are things like pollution and health problems. Negative externalities tend to cause goods and services to be "overproduced" because part of the full cost of those goods and services is borne by third parties, neither the producers nor the consumers.

As a result, negative externalities are major drivers of profits because in effect they result in inherent subsidies. A polluter, for example, can reduce their manufacturing costs by simply dumping waste on public property instead of incurring the costs of properly disposing of it. Negative externalities are most prevalent when the harm done to others is indirect, difficult to quantify, or difficult to attribute, yet they are ubiquitous in modern economies, and forcing businesses to compensate for the externalities that they benefit from is often very difficult because of the disproportionate political power wielded by capital owners.
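
A crude numeric sketch of that inherent subsidy, reusing the widget numbers from earlier and assuming a $2-per-widget cost for responsible waste disposal (the $2 figure is purely hypothetical):

```python
# Toy comparison: externalizing a disposal cost acts as a hidden subsidy.
price = 15.0         # selling price per widget
full_cost = 10.0     # cost per widget when waste is disposed of responsibly
disposal_cost = 2.0  # hypothetical portion of that cost paid for disposal

profit_responsible = price - full_cost                # $5.00 per widget
profit_dumping = price - (full_cost - disposal_cost)  # $7.00 per widget

print(profit_responsible, profit_dumping)
# The extra $2.00 hasn't vanished; it has been shifted onto third parties
# downstream, which is exactly what makes it a negative externality.
```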

Putting it all together

So now that we've examined many aspects of capital ownership and market theory in detail, let's put it all together to get a comprehensive understanding of how capitalist economic systems operate, their advantages and disadvantages, and the ways in which capitalist systems inevitably lead to concentration of capital ownership.

First let's talk about profits. Profits are essentially the spread between the costs of commodity production and the revenue from sales of said commodities, which goes to the owner of the capital used to produce said commodities. Profits can also be described as the spread between the purchase price of property and the sale price of property. Thus profits are a form of income that is determined purely by ownership of property, not by work done. An owner of capital may also perform work that generates revenue, in which case the distinction between profits and wages becomes a blurry one, but profits in the truest sense are derived solely from capital ownership rights.

In essence, profits are a measure of market inefficiency because profits are not a product of value creation, they are a product of price disparity between market participants.

The more efficient a market is the smaller the disparity between costs and revenues will be, thus capitalists, i.e. the recipients of profit income, are driven by profit motive to undermine market efficiencies. Capitalist business owners are inherently motivated to maximize the efficiency of their own businesses, while minimizing the efficiency of the market. The market inefficiency that has the biggest impact on income distribution in a capitalist system is the inefficiency caused by the separation between labor markets and commodity markets. In their capacity as employers capitalists are driven by profit motive to make businesses a form of black box that obfuscates the relationship between labor markets and commodity markets.

Labor markets are less than optimally efficient for a wide variety of reasons, including some that are not under the influence of capitalists. In addition to factors such as employers' withholding of information from workers and barriers to entry imposed on job markets by employers, factors such as workforce mobility, worker desire for stability, worker avoidance of risk taking, etc. all play a role in reducing labor market efficiency in ways that tend to benefit employers.

When an individual owns their own capital and works for themselves they are neither a wage-laborer nor a capitalist, and their income is always a product of the full market value of the commodities that they produce and sell, i.e. 100% of net revenue goes to the individual who performs the work. Thus, working for one's self is the only way to ensure that a worker can receive the true value of their labor, but individual production is almost always less efficient than collective production, thus individuals are driven into collective production by market forces in an industrial economy. This is a really critical point to understand in relation to capitalism and the history of the American economy.

America's distinctive economic feature early in its history was the almost universal rate of private capital ownership among free white families. Private capital ownership ensured that individuals kept the fruits of their own labor. From the colonial era up to the Civil War America's economy was dominated by home-based production and small family businesses, where husbands owned their own capital and worked with their wives and children to create value and generate revenue for the family. Since all or most of the workers in these businesses were family members, the entirety of the income stayed within the family, and thus private capital ownership served to ensure that workers kept all of the value that they created since they employed their own labor on their own property, unlike feudal societies where all of the property was owned by an aristocracy, upon which masses of unpropertied peasants worked, with much of the fruits of their labor going to the property owners.

This condition of family based production was excellent in terms of ensuring that individuals kept the fruits of their own labor, and thus excellent in terms of creating a "just" economic system for white families (which was of course extremely unjust for Native Americans, African American slaves, and single women), but it was also very inefficient. The reality is that while individual capital ownership and individual production ensures that workers keep the full value of what they produce, collective capital ownership and collective production are far more efficient.

Capitalism is a system for facilitating the collectivization of production. After the Civil War, with the rise of industrialization in America, individual and family-run businesses gave way to the market efficiencies of collective capital ownership and production. This is the real driving force of capitalism. Profits are inherently maximized when capital ownership is concentrated and production is highly collectivized. What the capitalist system does is provide a profit motive for individuals to privately collectivize production and minimize the number of individuals who own capital. The countervailing market force to capital ownership concentration is risk sharing: while capitalism creates an incentive to concentrate capital ownership on the one hand, the difficulty of amassing capital and the risk of doing so provide an incentive for collective capital ownership as well.

This is where corporations come in within the capitalist framework. Corporations are legal entities that have been designed to minimize the risks of capital ownership while allowing individuals to pool their resources to collectively share ownership of capital, thus facilitating a more rapid collectivization of production. Individuals and families who directly owned and operated their own capital were at a market disadvantage against corporations, through which resources could be pooled and the risks of capital ownership could be shared among more individuals. However, though corporations facilitate collective capital ownership, the efficiencies of collective production and the mechanisms of collective ownership meant that collective capital ownership through corporations in fact facilitated a concentration of capital ownership.

This is due in part to the fact that by pooling resources a relatively small number of capital owners could out-compete a larger number of independent capital owners. In other words, in a market with 100 independent owner-operators, a corporation collectively owned by 10 people could generate enough efficiencies to put all 100 of the independent owner-operators out of business, and thus by pooling resources 10 capital owners could force 100 capital owners out of positions of ownership. Typically, the result is that many of those 100 former capital owners would then become wage-laborers working for the 10 capital owners, while some would pool their resources together to form larger competing corporations, and some would simply become destitute, retire, or go into a different line of work.

The important thing to understand is how in this manner collective private capital ownership can actually reduce the number of individuals who own capital.

In addition, through "public ownership" corporations are able to spread large portions of risk across a large population by selling tiny fractional shares of ownership to many, many people, while retaining controlling shares of ownership among a small number of people. This works to concentrate control of capital among a very small number of owners, while spreading risk among a larger number of individuals. In addition to risk reduction through collective capital ownership, controllers of capital in capitalist societies like America have been able to further reduce their individual exposure to risk via their disproportionate influence over legislative bodies, thus ensuring that the laws further protect them from risk exposure.

The reduction of risk exposure among capital owners and controllers thus further facilitates the concentration of capital because, as was previously stated, the countervailing force to capital concentration within a capitalist system is risk sharing. As the risks of capital ownership are reduced via government protections for capital owners through so-called "business friendly" legislation, the risks associated with capital concentration are minimized, enabling further concentration of capital ownership. This concentration of ownership then has a snowballing effect, leading to growing influence of capital owners over governments and economies, inducing governments to offer increasing protections to the capital owners' ever-larger institutions, both because of the growing influence of their financial power and out of real fear of the impact that a failure of their institutions could have on the overall economy. Such protections, of course, then continue to facilitate further concentration of capital ownership, as evidenced by the large scale of corporate stock buybacks since the beginning of the 2008 recession.

This effect can also be seen in the consolidation of banking in America in the wake of the "great recession" of 2008, during which a large number of banks failed and the government provided unprecedented protections for the largest banks.

The concentration of capital has a significant impact on labor markets as well. Today over half of the American workforce works for large companies, i.e. businesses with more than 500 employees. These businesses represent just 0.3% of private employers in the country. This means that 0.3% of the nation's private employers employ over half of the American private industry workforce.


source: http://www.census.gov/econ/smallbus.html#EmpSize

The fact that over half of the private American workforce is employed by less than 1% of the nation's businesses provides significant wage-setting power for those large employers and creates significant political leverage as well.

As capital ownership is concentrated, capital owners increasingly benefit from market inefficiencies, which is to say that the incomes of capital owners become increasingly a product of market inefficiencies, which is to say that the wealth of the wealthy increasingly represents graft from society as opposed to the creation of value.

The most difficult thing to understand about this situation, however, is that it isn't a black and white scenario. The incomes of capital owners and the super-rich are not purely graft. Capital owners and the super-rich typically do create real value, and are often very high value-creating individuals; the issue, however, is that while they may create large amounts of value themselves, their incomes also include large amounts of value that they didn't create, value that is instead redistributed from workers, taxpayers, and other property owners via various market inefficiencies. Not only this, but there is a high degree of variability in the ratio of real value creation to graft among the super-rich. The incomes of some of the super-rich are almost entirely graft, while others' are mostly products of real value creation.

As a crude representation of this concept, imagine a diagram in which blue diamonds represent value that is both created and received by an individual in the form of income, red diamonds represent value that is created by an individual but redistributed to capital owners, and green diamonds represent capital income received by capital owners. The reality is that the green diamonds are the red diamonds created by workers, which have been transferred to the capital owners. In this picture the capital owner is creating value themselves as well, in fact more value as an individual than the others are creating, but the amount of additional value they create relative to the others is relatively modest; their total income is much greater than the others' because they are receiving such large amounts of capital income, which is value being redistributed from other workers to them.

This is, to a great extent, what makes understanding this issue so complicated: the super-rich are for the most part both the highest value creators in our economy and the largest recipients of redistribution. In fact, the super-rich are large recipients of both pre-tax redistribution and after-tax redistribution, but of these the pre-tax redistribution is by far the most important and least recognized. Pre-tax redistribution in a capitalist economy goes overwhelmingly from workers to capital owners. After-tax redistribution in the current American system goes primarily from middle-class and high-income workers to both the poor and super-rich capital owners.

The most visible aspect of redistribution in the American economy has long been after-tax redistribution to the poor, yet much of the need for after-tax redistribution to the poor is a product of the pre-tax redistribution from the poor in the first place, who typically see the largest portion of the value that they create going to the profits of capital owners.

This redistributive effect, then, in addition to the forces of competition, industrialization, economies of scale, market inefficiencies, and government subsidy of capital owners, results in increasing concentration of capital ownership over time in capitalist economies.

Problems and Solutions

The most fundamental criticism of capitalism as an economic system has always been that over time there would be an inherent tendency toward concentration of capital ownership, and that this concentration would, in the end, undermine not only the very basis of the economy but the entire social and political system as well, indeed undermining democracy itself. Concentration of capital ownership within the capitalist framework is predicted by fundamental economic theory, and the prediction is strengthened by social and political science, which easily explains how the inherent economic tendencies toward capital ownership concentration will be compounded by social and political institutions as the power and influence of capital owners grow along with their consolidated ownership of capital.

America's own history, as arguably the world's flagship capitalist society, confirms this prediction of increasing capital ownership concentration within capitalist systems. The economic reforms of the 20th century in America and Western Europe never changed the underlying fundamentals of capitalism, and as a result, concentration of capital ownership continued unabated throughout the 20th century, even after the rise of so-called welfare-state systems in America and Western Europe. Indeed welfare-state systems enabled the continuing growth of capitalist economies amid increasing concentration of capital ownership.

What the economic reforms of the 20th century did in capitalist economies is they enabled the existence of a non-capital owning middle-class. The welfare-state reforms, by ensuring that workers would receive a greater share of revenue and by creating social safety nets, broke the very strict relationship between capital ownership and compensation, which, while leading to real improvements in quality of life for the working class and sustaining economic growth over the relatively short-term (a few decades), ultimately enabled continuing concentration of capital ownership.

This is reflected in the growth of non-capital wealth owned by the bottom 90% of households in America following the New Deal reforms. What the economic reforms did was lead the middle-class, in both America and Europe, to believe that the fundamental structure of capitalist economies was sound and that capitalist economies could be equitable. This was really a facade, however, with the continued functioning of capitalism only made possible by both the redistributive welfare-state reforms and massive debt, both private and government, used to prop up the welfare-state and capitalists themselves.

Meanwhile, throughout the latter half of the 20th century, actual capital ownership continued to be consolidated behind the facade. As capital ownership became increasingly concentrated, both the incomes and the political power of capital owners increased exponentially, leading to the resurgence of economic inequality in capitalist economies. The fundamental "flaw" of the economic reforms of the 20th century was that they did nothing to address the underlying structure of capital ownership, hence the equitableness of the reforms was merely superficial. It was always inevitable that without reversing the concentration of capital ownership, capital ownership would eventually become so concentrated that the political and economic power of capital owners would be consolidated and great enough for them to undo the very reforms that enabled a non-capital owning middle-class to exist. In addition, even without the intentional elimination of these reforms, preventing on-going economic crises in capitalist economies would require continuous expansion of the welfare-state.

Capitalist economies around the world are now faced with a battery of internal contradictions. Both the governments and the workers/consumers in capitalist economies are saddled with large amounts of debt, while at the same time the portion of economic output going to workers/consumers is also falling as the effects of concentrated capital ownership play out, thereby reducing the ability of those entities to generate economic demand.


source: http://www.pbs.org/newshour/businessdesk/2013/01/merle-hazard.html

Therefore, despite continuous increases in productive capacity, consumptive capacity is stagnating or falling in mature capitalist economies, which may not be a bad thing from an environmental perspective, but is undesirable from an economic perspective.

The effects of highly concentrated capital ownership are not only economic, however, they are also societal. As capital ownership is consolidated a smaller and smaller portion of the population gains increasing influence over culture and conversely communities lose control over culture. As control over culture becomes consolidated, the relationship between producers and consumers becomes increasingly predatory, because producers are increasingly removed from the community. Producers become less and less concerned with community standards, morals, and concerns because they are less and less impacted by them.

In a local economy where production takes place within the community and producers and consumers live in contact with each other, producers are a part of the community and are affected by the impacts of their own products, both directly and indirectly via their relationships with community members. When producers are far removed from consumers, the impacts of their production on the community are not felt by producers and are of less interest to them. As a result, the relationship between producers and consumers becomes one driven purely by profits.

As an example, a local clothing maker with a family, living in and serving a local community market, would not be very likely to make sexually provocative clothing for young teenagers and market that clothing directly within the community. Yet that same clothing maker would be much more likely to produce sexually provocative clothing and market it to young teenagers if they were serving a global market of millions of consumers, because the scale of the market and the separation from the community insulate them from the societal impact of their own production, and the economic benefits to themselves from increased sales outweigh any negative social reaction, since they are removed from it.

Likewise, whereas the parents of teenagers might feel empowered to confront a local producer, and have the market power to affect a small local producer, thereby putting pressure on that producer not to make and market sexually provocative clothing to their children, consumers feel disempowered, and are in actual fact disempowered, in their relationships with large multi-national corporate producers. The parents of children don't have the same type of control over large multi-national corporate producers and marketers that they do over small local businesses, and likewise, large corporate producers don't have a localized community relationship with the markets that they serve, and are thus incentivized differently than producers who live and produce within local communities of which they are members.

Ultimately what this means is that as capital ownership becomes consolidated, fewer and fewer individuals have control over capital, which means that fewer and fewer individuals have control over production. Culture is largely a product of production; the things that we produce define our culture. This means that our culture becomes defined by a smaller and smaller portion of the population as capital ownership is consolidated, which of course means that families and communities have diminishing control over culture.

Maintaining control over the means of production is a key part of how the Amish preserve their culture. Amish society is a communal society in which they forego the use of most modern technology. Foregoing the use of modern technology preserves a pre-industrial economic structure, helps to ensure relatively equal property distribution among the Amish, and preserves their communal way of life. The Amish also preserve their culture by producing the commodities that they consume themselves, within their pre-industrial communal framework. This is what gives them control over their culture. The entire economic system of the Amish is designed to foster communal cohesiveness and to preserve the traditional values and way of life of the Amish people. This is a key understanding, because the Amish demonstrate the critical role that the means of production and the economic system plays in determining culture.

The approach that the Amish have taken to preserving their culture and communal economic system is a Luddite type of approach. Though the Amish are an extreme example of this type of opposition to industrialized capitalism, it is also common for liberal opponents of capitalism to view technology as a part of the problem. It is certainly true that advances in technology can facilitate consolidation of capital, but let's not forget that capital ownership was highly concentrated under the feudal system, and capital ownership was consolidating rapidly in the American South under the plantation system prior to the Civil War as well, so concentration of capital ownership can occur even without advances in technology.

The "traditional" approach to serving the interests of workers in capitalist economies has been to try and protect the "jobs" and wages of workers. Within the capitalist framework workers often view their interests as being in opposition to technological advances because the focus is on "preserving their jobs" and preserving the value of their labor. There is the view that as technological advances are implemented in the workplace, and the workplace becomes more efficient, fewer workers are needed to produce the same or more output, and thus workers will lose their jobs, which is their primary or only source of income. As such, there is a tendency among workers to oppose significant labor saving efficiencies in order to preserve their jobs, and thus their source of income. This is an even more common phenomenon in Europe where unions and tradition are more powerful than in America.

There is also a tendency toward "smallism" within capitalist economies, which is favoritism of "small" or "independently owned" businesses. There is a recognition that owner-operators truly "earn" a larger portion of their income, and a belief that small businesses better serve their local communities, etc. Small businesses have less individual power over workers and often tend to be more concerned with community standards and community interests.

The problem is that opposing technological advances in production in order to preserve jobs stifles efficiency which ultimately undermines the economy, and small businesses tend to be less efficient than larger businesses that can take advantage of economies of scale. This ultimately ends up leading to lower productivity and lower incomes for workers. The central problem goes back to the distribution of capital ownership. As businesses increase in size and market share, ownership and control over capital is naturally consolidated, and as production processes become more efficient fewer workers are needed to produce the same quantity of goods and services.

Opposing technological advances and favoring small businesses (in some cases through economic subsidies or regulations) are ways of trying to reduce capital ownership concentration within a capitalist framework through structural constraints, which necessarily reduce productivity and economic efficiency. Conversely, maximizing efficiency through capital improvements while allowing capital ownership to become concentrated not only undermines economic fairness and invites corruption, but also ends up undermining economic potential, since the consumptive capacity of the working class is reduced as workers inevitably receive a diminishing share of the wealth created by the economy while the share going to capital owners increases.

This is why capital ownership has to be broadly shared in some way in order for an economy to maximize its potential over time, and in order to reduce economic exploitation and unfairness as well as the undue influence of a relatively small number of individuals over the economy and culture. The question is how to achieve this. The major Communist and Socialist regimes of the 20th century all basically sought to do this by making "the state" the sole or major owner of capital, and thus calling the capital owned by the state "publicly owned". These implementations are often considered "State Capitalism" because the government essentially took on the role of "capitalist", owning and directing the use of capital, managing labor relations, and setting wage compensation, using the "profits" from enterprise as a source of government funds, which were in theory intended to be used to provide benefits to the citizens.

The problem with this is somewhat obvious, especially with historical hindsight. Principally, capital ownership is still concentrated under such a system and many of the same problems with private capital ownership concentration persist, on top of additional problems created by the politicization of essentially all employment and economic activity, and a host of other issues that go without saying.

An alternative to this is something called Distributionism, which essentially means ensuring that capital ownership is widely, almost equally, distributed among all individuals in the population. The Distributionist movement was originally started by religious conservatives in the 19th century, and was at that time a somewhat anti-industrial movement; however, the basic principle of equal distribution of private capital ownership can be applied within a progressive pro-industrial framework as well, as I lay out in A Progressive Foundation for America's Economic Future in the section titled Implement a National Individual Investment Program.

The critical aspect of a progressive distributionist system would be that the government, instead of owning or controlling capital, would be used merely to ensure equal distribution of capital ownership to individuals, but capital ownership would remain private. Far more needs to be done than the National Individual Investment Program that I described in the aforementioned article, but the central point is that capital ownership has to be far more equitably distributed. As technology and efficiency progress it is imperative that everyone have significant capital income so that capital income can become an increasing share of national income.

The traditional situation in capitalist economies pits the interests of wage-laborers against capital owners, whereby the gains of one are essentially the losses of the other. What we see in capitalist economies around the world is that the share of income going to capital is increasing, while the portion of the population receiving that income is decreasing, so a larger and larger share of income is going to a smaller and smaller portion of the population. The reaction to this situation by many people who oppose growing income inequality has been to advocate for reversing this trend by increasing labor's share of income, but this is an entirely wrong approach. Instead of increasing labor's share of income, what needs to be done is that capital ownership should be more broadly distributed so that the increasing capital income goes to everyone instead of just to a small number of people.


source: http://www.theatlantic.com/business/archive/2013/02/the-mystery-of-the-incredible-shrinking-american-worker/273033/

The problem with the traditional dynamic within capitalist economies is that it induces workers and policy-makers to try and sustain or prop up labor's share of income, a tactic which is ultimately unproductive. If everyone is an equal owner of capital, however, then the share of income going to labor becomes significantly less important. The problem with a low share of income going to labor in a traditional capitalist economy is that labor is the only meaningful source of income for the vast majority of the population. The middle-class exists on labor-income and essentially only makes use of capital income in retirement, while the poor have essentially no capital income throughout their lives. Without the income from labor capitalist economies would collapse, but this is only because capital ownership is so concentrated and the fact is that in a "free-market" capitalist economy increasing capital ownership concentration is inevitable, as we have seen.

This puts capitalist economies in a position where "jobs" have to be created and maintained purely for the sake of distributing income, not for the sake of production. The purpose of an economy is not to "create jobs"; the purpose of an economy is to create wealth. Jobs are merely a means to that end. Jobs are something that we should be working to eliminate, not create. The only reason that there is any talk of "creating jobs" is that jobs are the means of distributing wealth to the poor and middle-class within capitalist economies. But if everyone actually owned meaningful capital, i.e. if everyone were a real "capitalist" instead of just 1%-2% of the population being capitalists, then creating jobs would not only become unimportant, but in fact everyone would benefit from the elimination of jobs, and the economy would be on sound footing to truly embrace efficiency. Productive automation and maximizing efficiency can only be fully embraced when the dependency on "jobs" as a means of distributing wealth is eliminated.

This doesn't mean that jobs or working would be eliminated overnight; what it means is that the division of income between labor and capital would become far less important, and eliminating jobs would become a means of increasing everyone's income, not just the incomes of a tiny minority of super-rich capital owners. If capital ownership were truly equally distributed, no individual would have massive capital income, but everyone would have moderate capital income. Given the current state of technology, the majority of everyone's income would still come from labor; but instead of some people receiving billions of dollars a year in capital income, with capital income making up 99% of their income, while others have no capital income at all, everyone's income would be roughly 10% to 30% from capital and the rest from labor. That means that even if you lost your job, you would automatically retain 10% to 30% of your prior income, which is far better than losing 100% of your income, as is the case for most poor and middle-class workers today, and this in turn would reduce the need for government-provided safety nets (unemployment benefits, welfare programs, disability benefits, etc.).
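To make the arithmetic concrete, here is a minimal sketch in Python. The dollar figures are purely hypothetical; only the idea of a 10%-30% capital share comes from the discussion above:

```python
# Illustrative arithmetic only; all dollar figures are hypothetical.
def household_income(wage, capital_income):
    """Total household income from labor plus distributed capital ownership."""
    return wage + capital_income

wage = 50_000            # labor income while employed (hypothetical)
capital_income = 15_000  # income from a distributed capital stake (hypothetical)

total = household_income(wage, capital_income)
print(f"Employed: ${total:,} total, {capital_income / total:.0%} from capital")

# If the job is lost, labor income drops to zero but capital income continues,
# so the household retains a meaningful fraction of its prior income.
after_loss = household_income(0, capital_income)
print(f"Job lost: ${after_loss:,} total, {after_loss / total:.0%} of prior income retained")
```

With these numbers the household keeps about 23% of its prior income after a job loss rather than 0%, which is precisely the point made above.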

Some people would still be paid millions of dollars a year in labor compensation and some people would still only be paid minimum wage, so income inequality would still exist and "reward" for an individual's contributions would still be a part of the economic system, as it should be, but the current massive income inequality which is a product of capital ownership concentration would be greatly reduced.

Of course, truly equal ownership of capital could never actually be achieved; all that could ever be achieved is a far more equal distribution of capital ownership, because the only form of capital ownership that could easily be distributed is shares of corporations. Individuals would still directly own private capital even in a distributionist system, meaning that small business owners would retain full, independent ownership of their businesses. There is absolutely nothing wrong with that, because small business ownership is itself just another means of distributionism - in fact it is the original means by which capital ownership was widely distributed in America in the first place.

The important aspect of a progressive distributionist system is that it embraces industrialization, automation and economies of scale, while ensuring universal capital ownership.

Many other reforms of capitalism are also needed, but relatively equal distribution of capital is the key to all of them, because many of the problems inherent in capitalist economies are products of concentrated capital ownership and the motives it inspires. In addition, capital ownership is a source of political power, so if capital ownership were more equitably distributed, political power would be as well, which would in turn further facilitate democratic reform of capitalist systems.

Summary and Conclusion

The United States of America has been, in many ways, the ideal test case for capitalism. The widespread ownership of private capital prior to industrialization provided an advantageous base upon which to build a so-called "free enterprise" capitalist system. It has to be recognized, however, that the conditions which made widespread capital ownership possible in the United States were a historical anomaly: the result of a technologically advanced civilization invading a large land mass inhabited by a small and significantly less technologically advanced population, which the invaders essentially exterminated, making it possible for virtually every free family of invaders to become property owners. In addition, the policies of the early American government aided widespread capital distribution through subsidies for individual capital acquisition and through policies designed to ensure that capital (primarily in the form of land) would be at least somewhat equally distributed among individuals instead of concentrated into the hands of a few powerful individuals.

The relatively widespread prosperity present in early American society was a direct product of this relatively widespread capital ownership. In addition, the slavery system present in the early United States essentially subsidized all of white society, even non-slaveholders, by reducing the cost of materials for everyone. So freely available land/capital, along with "free labor", played a large role in making "white" society highly egalitarian in early America.

The question for capitalism as an economic system was whether, once the "free" capital (land) was all gone, slavery had been ended, and industrialization had commenced, the "free enterprise" capitalist system would be able to continue providing widespread economic prosperity. By the end of the 19th century many people in America and around the world had begun to doubt that it would, and those doubts were only strengthened by the global economic depression of the 1930s.

Many different solutions to the economic crisis of the 1930s were implemented in industrialized countries around the world, ranging from socialist to fascist to welfare-state capitalist systems. What essentially all of the major "solutions" had in common was that they increased regulation of the economy, created social safety nets of some kind for at least some subset of the population, and further centralized control and ownership over capital.

In ostensibly socialist systems capital was centralized under government control, largely against the interests of existing capital owners. In fascist systems capital was centralized under government control largely in favor of the interests of existing capital owners. In welfare-state systems capital generally became more centralized under the control of private capital owners, while the control of capital became more heavily regulated and taxed by governments.

Arguably, the only systems under which capital control became more widespread were some of the socialist systems that "redistributed" land during the 1930s-1960s, but even this redistribution wasn't entirely meaningful, because in most cases the redistributed land became "collectively owned" - that is, still owned by the state.

The welfare-state reforms which came to dominate "developed" capitalist economies like those of the United States and Western Europe provided increased economic security and stability. They ensured that prosperity was more widespread by improving the rights and protections of non-propertied workers, by increasing the share of created value received by those workers through regulation, and by creating social safety nets for workers, non-workers, and capital owners alike.

These welfare-state reforms, however, treated the symptomatic problems of capitalism without curing their underlying causes. In doing so they actually enabled the root cause of imbalance in capitalist economies, concentration of capital ownership, to keep growing.

Ultimately what we see when looking at the economic history of the United States is constantly increasing concentration of capital ownership over time in the world's flagship capitalist economy, under every regulatory scheme and every set of market conditions. We therefore have to conclude that increasing concentration of capital ownership is an inherent outcome of capitalist economies. We must also recognize that concentrated capital ownership undermines the very principles that the proponents of capitalism espouse. However, it would be incorrect to claim, as Karl Marx did, that capitalist economies will "destroy themselves" or cease to be able to function. After all, feudal economies persisted for thousands of years. Will capitalism inherently reach a state of "crisis"? Not if by crisis we mean a state in which the economic system completely fails and the ruling class loses power, much as the ruling aristocracies lost power with the end of feudal economies. That might happen, but it isn't inevitable.

What will happen, unless capital ownership itself is massively redistributed, is that economic disparity will increase, a ruling class will become entrenched, and the working class will receive an ever-shrinking share of the value created in the total economy. That condition can persist for centuries, just as feudalism persisted for centuries.

To understand the concentration of political power in America over time we have to understand that American economic populism was originally rooted in the socially conservative farming population. When America was founded the majority of the population were farmers, and they owned the majority of the capital; as such, they had tremendous political power. What made American farmers a unique political force was that they were populist capital owners, so as the farming population shrank, the political power of populism shrank with it. As farmers declined from roughly 85% of the population at the founding to around 40% at the turn of the 20th century, their political power waned but remained relatively strong, and it was the political power of the farmers that made many of the populist reforms of the early 20th century possible.

By the 1930s the political power of the farmers was growing weaker still, but it remained strong enough to help usher through the major economic reforms of the New Deal; without the political backing of America's farmers it is a virtual certainty that those populist reforms would not have been possible. The fundamental reason that American economic populism died shortly after World War II is the industrialization of farming. By the 1970s farming had become an industrialized enterprise dominated by corporations instead of individuals, and farmers had shrunk to about 5% of the population. What political power the farmers retained thus became aligned with broader corporate interests.

This I think is key to understanding the transformation of American economic populism. American farmers were the only meaningful group of capital-owning populists. Once their political power waned and their interests became aligned with corporations, the only remaining populists were non-capital-owning workers, who, inherently, by virtue of their lack of capital ownership, had (and continue to have) relatively little political power. 

The failure of American economic thought is the failure to grasp and internalize the paradox of property rights. Yes, private capital ownership is what made America a great country and it is what made American society relatively egalitarian and it is what made America a land of opportunity, but that only worked because there was a large stock of "unclaimed" freely available capital, in an economy dominated by individual economic production. This situation, by its very nature, essentially negated the conflict between labor rights and property rights.

However, if we take John Locke's statement on property rights to be true, that the right to property is a product of labor, then it becomes obvious that owning capital necessarily deprives other individuals of their right to property. The only time that this isn't true is when every individual owns their own capital and all production takes place on an individual scale. Since that was largely the condition in the early American economy (for white males), the association between private capital ownership and retention of the products of one's own labor became ingrained in the American psyche. But as soon as an individual exercises their labor upon capital owned by another individual the paradox of property rights is raised. The early American economy did not resolve the paradox of property rights, it merely avoided the paradox (for white families) by virtue of having access to an essentially unlimited supply of free capital in the form of land, and American economic thought has failed to come to grips with this situation ever since.

It is clear that the incomes of today's large capital owners are not a product of their own labor; rather, their incomes are the result of redistribution of value created by millions of other people, ultimately workers all over the world. Today there are individuals, largely in the financial industry, with incomes of over a billion dollars in a single year, virtually all of it derived from capital ownership. It is impossible to attribute these incomes to the labor of the individuals who receive them, yet I think most people fail to grasp just how large these incomes are: one billion is a thousand times one million, and ten thousand times one hundred thousand.

To claim that any individual creates billions of dollars of value in a single year with their own labor, while the average American creates around fifty thousand dollars of value a year, is a total farce; yet that income has to come from somewhere - it has to be created by someone's labor, if not theirs.
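Some simple arithmetic, using the round figures above, makes the scale explicit:

```python
# Scale comparison using the round figures from the paragraph above.
billion_dollar_income = 1_000_000_000  # annual income of a top capital owner
average_annual_output = 50_000         # rough value created by an average American per year

worker_years = billion_dollar_income // average_annual_output
print(f"A $1 billion income equals the annual output of {worker_years:,} average workers")
# -> A $1 billion income equals the annual output of 20,000 average workers
```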

The reality is that the incomes of today's large capital owners are little different from the incomes of the feudal aristocracies. The only differences are that today it is easier for a non-capital owner to become a large capital owner than it was for a commoner to join the aristocracy, and that becoming a large capital owner is more merit-based; but the incomes are still derived in essentially the same way: they are products of private taxation, the right to which is granted through property ownership. Property rights grant owners of capital ownership of value created by other individuals. The incomes of today's large capital owners are products of concentrated property ownership taxing away value created by the non-capital-owning majority, just as they were in feudal times.

There is no easy resolution to the paradox of property rights. There is no meaningful way to determine the true "labor value" contributed by an individual in an industrial economy, because the relationships between labor and value creation are far too complex, and the collective nature of production necessarily produces value that is greater than the sum of the individual contributions. In addition, as technological progress continues (assuming that it does), automation makes reliance on labor as a primary source of income increasingly misguided anyway.

This is why the labor movement's efforts in capitalist economies to increase the wages of workers have been largely misdirected for the past half century. Instead of focusing on increasing wages, the focus should have been on distributing capital ownership. The "blame" for this misdirection does not rest entirely on the labor movement, however, because capitalists themselves have intentionally kept the struggle focused on wages rather than on capital ownership. Wage concessions are easier to win in capitalist economies precisely because ownership and control of capital is what matters most to capitalists: they know that in the long run retaining control of capital is more important than temporary wage concessions.

But in order to have an equitable economy that can maximize growth and efficiency, as well as democracy, capital ownership has to be relatively equally distributed in some way, via some mechanism. Capital distribution is ultimately far more important than wage distribution. This is the most important lesson that we can learn from America's economic history.

The realization that capital ownership inherently becomes more concentrated over time in capitalist economies is nothing new and has been theorized for over a century, but today the evidence for this has become overwhelming.

Private capital tends to become concentrated in few hands, partly because of competition among the capitalists, and partly because technological development and the increasing division of labor encourage the formation of larger units of production at the expense of smaller ones. The result of these developments is an oligarchy of private capital the enormous power of which cannot be effectively checked even by a democratically organized political society. This is true since the members of legislative bodies are selected by political parties, largely financed or otherwise influenced by private capitalists who, for all practical purposes, separate the electorate from the legislature. The consequence is that the representatives of the people do not in fact sufficiently protect the interests of the underprivileged sections of the population. Moreover, under existing conditions, private capitalists inevitably control, directly or indirectly, the main sources of information (press, radio, education). It is thus extremely difficult, and indeed in most cases quite impossible, for the individual citizen to come to objective conclusions and to make intelligent use of his political rights.
- Albert Einstein, "Why Socialism?", Monthly Review, 1949

Recognizing and understanding the concentration of capital ownership that has taken place over the course of America's history, and the implications of it, is therefore of extreme importance.

America's economy has evolved over time. The conditions that existed in America's early economic history no longer exist today, and therefore many of the economic principles that ensured shared prosperity in early America no longer have the same effect. The Communist and Socialist movements of the 20th century may ultimately have been failures, but the problems of capitalism that they sought to address remain. The failures of those movements don't vindicate capitalism; all they show is that those regimes were not the solution. The problems inherent in capitalism didn't go away simply because some alternative approaches failed.

The primary lesson of the failed Communist and Socialist regimes of the 20th century is that centralization failed; yet when we look at capitalist economies today, what we see is that centralization has taken place there as well. Today the global capitalist economy is as centralized as, or more centralized than, the Soviet Union ever was. The difference is that the economy is centralized under the control of powerful private interests who exercise heavy influence over governments around the world, rather than under the direct control of government.

A 2011 study by Vitali, Glattfelder, and Battiston ("The Network of Global Corporate Control") found that a mere 147 transnational corporations - less than 1% of the transnational corporations examined - controlled 40% of the value of the entire network. The study found "that network control is much more unequally distributed than wealth. In particular, the top ranked actors hold a control ten times bigger than what could be expected based on their wealth."

While this study examined only the relationships among transnational corporations, its findings support a broader conclusion: control over capital is even more concentrated than wealth distribution alone would indicate. This makes sense given how controlling shares of corporations work, which allows an entity to own less than half of the value of a corporation while retaining a controlling interest, as the sketch below illustrates. As capital ownership and control become increasingly concentrated, the ability of capitalists to undermine market conditions in order to preserve profits increases.
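To see how control can outrun economic ownership, consider a simple pyramid of majority stakes. The sketch below uses a hypothetical three-link chain; the firms and percentages are invented for illustration, not taken from the study's data:

```python
# Hypothetical pyramid of majority stakes: A holds 51% of B, B holds 51% of C,
# and C holds 51% of D. A has a controlling interest at every link, and so
# effectively controls D, yet A's economic ownership of D is only 0.51 ** 3.
stake = 0.51  # majority stake held at each link of the chain
depth = 3     # number of links between the top holder and the bottom firm

economic_ownership = stake ** depth
print("Controls the bottom firm: yes (majority at every link)")
print(f"Economic ownership of the bottom firm: {economic_ownership:.1%}")
# -> Economic ownership of the bottom firm: 13.3%
```

Generalizing this idea to a dense web of cross-holdings is roughly what the study's network analysis does, which is why measured control comes out so much more concentrated than measured wealth.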

This is the reality that we have to acknowledge and face. Ownership and control of capital is far more concentrated today than at any time in America's history and this fact is a product of capitalism itself. Ultimately, attempts at economic and political reform which don't fundamentally address the distribution of capital ownership are doomed to be temporary and ineffectual at best. The underlying cause of economic inequality and political disenfranchisement in capitalist economies around the world, particularly in America, is concentration of capital ownership. This concentration of capital ownership undermines the principles upon which America was founded, and now creates the type of feudalistic social and economic conditions that early Americans fought so hard against.
