When the value of human labor is decimated by advances in robotics and artificial intelligence, serious restructuring will be needed in our economic, legal, political, social, and cultural institutions. Such changes are being planned for by approximately nobody. This is rather worrisome.
If every conceivable human job can be done better by a special-purpose machine, it won't make much sense for people to hold jobs with corporations, earning wages to exchange for goods and services. This isn't the first time an entire paradigm of civilization has become obsolete; the corporation itself is only about 500 years old. Before that, a "job" meant a single project; craftsmen and merchants traveled from city to city as businesses unto themselves.
The corporation is useful because it can bring hundreds, thousands, or even millions of people together, working toward a common purpose under central planning. Countries are useful for very similar reasons, but at a larger scale and with the added complexity of enforcement (of laws, borders, and "the national interest"). But as technology advances, the context that gives power to these sorts of institutions will shift dramatically. As communication becomes cheaper, faster, and more transparent, central planning becomes a less appropriate tool for bringing people together. One of the most centrally planned organizations in history, the Soviet Union, was undone in part by the fax machine, which let citizens bypass the state-run media. The corporate model is far more adaptable, but we're already seeing it struggle with the likes of the "hacktivist" group Anonymous.
More importantly, future technologies will be used to bring people together in new, fluid structures with unprecedented productivity. New sorts of entities are emerging, which are neither countries nor corporations, and in the not-too-distant future, such agents will become less "fringe," ultimately dominating both industry and geopolitics.
Individual humans will eventually fade even from the social world. When you can literally wire your brain to others, who's to say where you stop and they begin? When you can transfer your mind to artificial embodiments, and copy it as a digital file, which one is you? We'll need a new language, a new conceptual vocabulary for everything from democracy to property to consciousness, to make sense of such a world.
One possible way to begin developing such ideas is reflected in my title. It's a bit of wordplay, referring to three disparate intellectual movements: transhumanism, which predicts the emergence of technologically enhanced "posthumans"; human geography, which studies humans' relations with each other and the spaces they inhabit; and posthumanism, a school of criticism which revisits the usual assumption that individuals exist. These fields have much to learn from each other, and I think combining them would be a good start to addressing these issues.
For example, transhumanist stories about the future (like most stories) feature individuals as characters. Those characters may communicate ideas to each other "telepathically," among other new capabilities, such as moving their minds from one embodiment to another (thus avoiding most causes of death). But they are still recognizable as people. With high-bandwidth brain interfaces, however, as posthumanist critics remind us, we may need to revisit the assumption that distinct individuals would be identifiable at all.
Human geography tends to ignore technological trends unless they are incremental, or at least easily described in terms of existing concepts. Transhumanist technologies would dramatically change the geographic picture (as a simple example, solar-powered humans wouldn't need agriculture). These geographic changes, in turn, would have significant consequences for the trajectory of transhumanist technology and society.
Posthumanism is highly abstract; perhaps predictably, thinkers who are interested in questions like "Do individuals exist?" or "What is the meaning of 'identity'?" tend to be fairly uninterested in questions like "In light of this, how will equity markets need to evolve?" They also tend to be suspicious of technology, often citing the distorting influence of the mass media. If they're aware of transhumanism at all, they criticize its individualism and move along, rather than looking deeper and seeing that these future capabilities could realize entirely new kinds of identity structures. Finally, although posthumanists do talk about economies and societies, it's usually without reference to real-life data or even historical examples.
Personally, I have little doubt that the next paradigm of civilization will be a change for the better. I'm not worried about that. But the transition from here to there might be painful if we don't develop some idea of what we're getting into and how it might be managed.