By the late 1980s, the software industry had convinced itself that objects were the answer to everything. Object-oriented programming was ascendant. C++ was eating the world. Smalltalk veterans were colonizing enterprise IT. And if objects were the right way to structure programs, then surely they were the right way to structure networks of programs too. The distributed object was the logical next step — or so it seemed.
What followed was one of the most ambitious, expensive, and ultimately instructive detours in the history of computing. Two competing visions — CORBA from a consortium of vendors, and COM/DCOM from Microsoft — tried to make distributed objects real. Both produced technology that shaped a generation of software. Both, in different ways, buckled under the weight of their own ambition. And both left behind ideas that we still use today, even if we have mostly forgotten where they came from.
CORBA: Design by Committee, at Scale
The Object Management Group was founded in 1989 by eleven companies, including Hewlett-Packard, IBM, Sun Microsystems, Apple Computer, American Airlines, and Data General. The founding executive team, led by Christopher Stone and John Slitz, had a charter that was breathtaking in scope: to create “a common architectural framework for object-oriented applications based on widely available interface specifications.” In plain language, they wanted a universal middleware layer. Any object, written in any language, running on any platform, should be able to call methods on any other object, anywhere on the network.
CORBA 1.0 arrived in October 1991. It defined three foundational pieces: an object model, the Interface Definition Language (IDL), and a set of APIs for the Object Request Broker (ORB) — the runtime that would mediate between clients and the objects they wanted to talk to. The idea was elegant. You would describe your object’s interface in IDL, a declarative language deliberately divorced from any implementation language. An IDL compiler would then generate stubs and skeletons — client-side proxies and server-side dispatchers — in whatever language you were using. C, C++, Smalltalk, Ada, COBOL: it didn’t matter. The ORB would handle the rest. Location transparency was the explicit goal. A client calling a method on a CORBA object should not need to know or care whether the object lived in the same process, across the hall, or across the ocean.
The ambition was real, and for a time, so was the traction. CORBA powered airline reservation systems, telecommunications infrastructure, and financial trading platforms. The technology worked, often impressively, in controlled enterprise environments where teams could standardize on a single ORB vendor and a narrow set of use cases.
But the OMG’s consensus-driven standardization process carried a fatal flaw. To adopt a specification, the OMG would solicit competing proposals and then — rather than choosing one — merge them. Michi Henning, a core participant in CORBA standardization and co-author of Advanced CORBA Programming with C++, documented this pathology in devastating detail in his 2006 ACM Queue article, “The Rise and Fall of CORBA.” As Henning explained, by combining features from competing submissions, specifications ended up as “the kitchen sink of every feature thought of by anyone ever.” This made them larger and more complex than necessary and introduced inconsistencies where different features would subtly interact and cause semantic conflicts.
CORBA 2.0, adopted in December 1994, attempted to solve the critical problem of interoperability between different vendors’ ORBs by defining the Internet Inter-ORB Protocol (IIOP). Now a client using IONA’s Orbix could, in theory, talk to an object hosted on Visigenic’s VisiBroker. In practice, ORBs varied so wildly in their implementations — transactional support, error recovery, threading behavior — that writing truly portable CORBA code remained excruciatingly difficult. The specification mandated IIOP compliance but left enormous latitude in everything else, and the learning curve was steep enough that companies struggled to hire programmers who had genuinely mastered the technology.
The core CORBA specification eventually exceeded 1,100 pages, and that was just the foundation. Layer on the CORBAservices (naming, trading, events, transactions, security, persistence, and more), the CORBAfacilities, and the domain-specific specifications, and you had a library of standards that no single human being could hold in their head. Henning noted that the OMG did not require a reference implementation for a specification to be adopted, which opened the door to “castle-in-the-air specifications” — standards that turned out to be partly or wholly unimplementable because of serious technical flaws.
And then the web arrived. During CORBA’s growth phase in the mid- and late 1990s, the computing landscape shifted seismically. Java appeared. Web browsers proliferated. HTTP became the universal transport. CORBA’s binary IIOP protocol could not traverse firewalls. Its traffic was unencrypted by default. It had no versioning support. While the OMG debated security specifications and vendors implemented proprietary extensions, companies were already building their e-commerce infrastructures on web browsers, HTTP, Java servlets, and Enterprise JavaBeans. CORBA didn’t die overnight — it lingered in telecommunications, defense, and financial systems for years, and the Java platform quietly embedded a CORBA ORB in every JDK through Java 8. But as a vision of the future, it was over by the early 2000s.
COM: Microsoft’s Pragmatic Binary Standard
While the OMG was building castles of consensus, Microsoft was solving a more concrete problem: how do you make Windows applications share components?
The story begins with Dynamic Data Exchange (DDE), a protocol from the 1980s that let Windows applications exchange small amounts of data. DDE was crude — it used Windows messages to coordinate transfers and was limited, fragile, and hard to program against. In 1990, Microsoft shipped OLE 1.0 (Object Linking and Embedding) as a layer on top of DDE. OLE enabled compound documents: you could embed an Excel spreadsheet inside a Word document, and when you double-clicked it, Excel’s editing interface would appear in place. For users, it was magic. For developers, it was a nightmare of callback protocols and data-format negotiation.
OLE 2.0, released around 1993, was a complete rewrite — and it introduced the technology that would outlast OLE itself. To build a robust compound-document system, Microsoft needed a general-purpose way for binary components to discover each other’s capabilities and manage their lifetimes. What emerged was the Component Object Model.
COM’s genius was its minimalism at the binary level. A COM object is, at its core, a pointer to a virtual function table (vtable) — an array of function pointers, laid out in a fixed order, that any language capable of calling through a function pointer can use. This was not an academic exercise. It meant a Visual Basic application could use a component written in C++, which could use a component written in Delphi, without any of them sharing a runtime, a compiler, or even a common notion of what a “string” was. The binary contract was the interface.
Every COM object had to implement one interface: IUnknown. It had exactly three methods. QueryInterface let you ask an object, “Do you support this other interface?” — a runtime capability-discovery mechanism. AddRef and Release implemented reference counting for lifetime management. That was it. Three methods, and from them, an entire component ecosystem grew.
Don Box, who would become COM’s most prominent evangelist, later recalled that understanding COM took him roughly six months of struggle after Microsoft revealed the technology to the world in 1993. His 1998 book Essential COM became the canonical text — not just documenting how COM worked, but why it worked the way it did. Box famously declared “COM is love,” a sentiment that captured both the elegant inevitability of COM’s design and the Stockholm syndrome of anyone who had debugged apartment-threading issues at 2 AM.
Ah, apartments. If IUnknown was COM’s elegant core, apartment threading was its torture chamber. Because COM objects could be written in languages with wildly different threading models — Visual Basic assumed single-threaded execution; C++ developers might use threads freely — COM introduced the concept of “apartments” to manage the mismatch. A Single-Threaded Apartment (STA) guaranteed that only one thread would ever call into an object, with COM marshaling cross-thread calls through a hidden window message pump. A Multi-Threaded Apartment (MTA), introduced with Windows NT 4.0, allowed multiple threads to call an object directly. Getting this wrong — and it was remarkably easy to get wrong — produced deadlocks, memory corruption, and the kind of bugs that only manifested under load in production. The apartment model was COM’s answer to a real problem, but it imposed a conceptual tax that no amount of documentation could fully defray.
OLE, rebuilt on COM, delivered genuine user-facing value. Compound documents worked. In-place activation worked. Drag-and-drop between applications worked. Microsoft’s own Office suite was the showcase — and the fact that Office’s internal component architecture was dogfooding the same technology it was selling to third-party developers gave COM a credibility that CORBA’s specification-first approach could never match.
DCOM: COM Meets the Network
Emboldened by COM’s success on the desktop, Microsoft asked the natural question: what if COM objects could live on other machines? Distributed COM (DCOM) launched as a beta for Windows 95 on September 18, 1996, and shipped with Windows NT 4.0. The pitch was seductive: the same QueryInterface/AddRef/Release model, the same vtable contracts, the same IDL descriptions, but now the ORB-equivalent (the COM runtime) would transparently marshal calls across the network using a protocol layered on top of DCE/RPC.
DCOM hit the same wall that CORBA was already crashing into, only faster. Firewalls hated it. DCOM’s use of dynamic port allocation and RPC endpoints made it a nightmare for network administrators trying to secure their perimeters. Configuration was brittle — the DCOM Configuration utility (dcomcnfg) became notorious as one of the most confusing administrative interfaces Microsoft had ever shipped, and that is saying something. Security was an afterthought bolted on through Windows’ existing authentication mechanisms, which were designed for LAN environments, not the internet.
More fundamentally, DCOM was Windows-only. Microsoft made a brief attempt at cross-platform DCOM — Don Box himself worked as a consultant to Software AG and Microsoft on a Unix-based DCOM implementation in 1996 — but the effort never achieved real traction. In a world where the web was making platform neutrality a baseline expectation, a Windows-only distributed object protocol was a hard sell.
Box’s experience with the Unix DCOM project was, by his own account, formative. It gave him “the desire to move away from shared-runtime distributed architectures and embrace data-centric message passing using XML.” In 1998, he co-authored the original SOAP specification with Bob Atkinson, Gopal Kakivaya, and Dave Winer — a direct repudiation of the distributed-object model he had spent years evangelizing. The same mind that had written the bible of COM was now designing its replacement. Microsoft eventually deprecated DCOM in favor of .NET Remoting, then Windows Communication Foundation, then web services, each step moving further from the distributed-object dream.
Bonobo: CORBA’s Last Stand on the Desktop
The distributed-object vision had one more notable champion. In 1997, Miguel de Icaza launched the GNOME project to create a free desktop environment for Linux. De Icaza, deeply impressed by what OLE 2.0 had achieved on Windows, wanted the same compound-document capability for free software. The result was Bonobo, a component framework that used CORBA as its transport layer, with ORBit — a fast, lightweight CORBA 2.2 ORB — providing the plumbing.
Bonobo powered component embedding in Gnumeric, Evolution, Nautilus, and other GNOME applications. It was, architecturally, an attempt to graft COM’s user-facing vision onto CORBA’s wire protocol. But Bonobo inherited all of CORBA’s complexity without the benefit of a single vendor controlling the entire stack. By the mid-2000s, GNOME quietly deprecated Bonobo in favor of D-Bus, a simpler, message-based IPC system. The distributed-object era had ended on Linux too.
Why Distributed Objects Failed
In 1994 — before CORBA had reached its peak and while DCOM was still on the drawing board — four researchers at Sun Microsystems published a paper that, in retrospect, reads like a prophecy. “A Note on Distributed Computing,” by Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall, argued that “objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space.”
The paper identified four properties that separate local from remote computation: latency (remote calls are orders of magnitude slower), memory access (you cannot pass pointers across a network; everything must be serialized), partial failure (one machine can crash while the other keeps running, leaving the system in an indeterminate state), and concurrency (distributed calls are inherently concurrent in ways that local calls are not). Any framework that tried to paper over these differences — that tried to make remote calls “look like” local calls — was, Waldo et al. argued, doomed to fail. Not because it was impossible to make the syntax identical, but because identical syntax would seduce developers into ignoring the semantics. They would write code that assumed calls were fast, reliable, and atomic, and the network would punish them for it.
Both CORBA and DCOM committed exactly this sin. Both used IDL to define interfaces that could be called identically whether the object was local or remote. Both generated stubs that made a network round-trip look like a function call. And both produced systems that were brittle, hard to debug, and prone to cascading failures when the network misbehaved.
The deeper problem was one of coupling. Objects, by their nature, encapsulate state. A distributed object system asks you to pretend that an object’s state is “somewhere” on the network, accessible through method calls, just as a local object’s state is accessible through method calls on a pointer. But state management across a network is a fundamentally different problem. What happens to a transaction if the connection drops between debit() and credit()? What is the correct behavior when QueryInterface succeeds but the subsequent method call times out? Distributed objects inherited object-oriented programming’s emphasis on mutable state and rich interfaces, but the network demands the opposite: statelessness, idempotency, coarse-grained operations, and explicit error handling.
What Survived
The distributed-object era lasted roughly from 1991 to the early 2000s, and by most accounts, it failed. CORBA retreated to legacy niches. DCOM was deprecated. The industry moved first to SOAP and XML (which stripped away the object semantics but kept the IDL-like contracts), then to REST (which stripped away everything but HTTP and JSON), then to gRPC (which, ironically, brought back IDL in the form of Protocol Buffers and binary wire formats — but without the pretense that remote calls were anything like local ones).
But the ideas that CORBA and COM pioneered did not die. They were too good to die.
Interface Definition Languages live on in Protocol Buffers, Thrift, FlatBuffers, and Cap’n Proto. The insight that you should define your contract in a language-neutral schema and generate bindings for each target language — that was CORBA’s IDL, and it was right.
Interface-based programming — the idea that components should interact through well-defined contracts, not concrete implementations — is the foundation of modern dependency injection, plugin architectures, and API design. COM’s QueryInterface was capability discovery before capability discovery had a name.
Binary component models persist wherever we need components written in different languages to interoperate without a shared runtime. WebAssembly’s Component Model, still taking shape, is solving exactly the problem COM solved in 1993: how do you define a binary interface that any language can target?
Reference counting for lifetime management, the core of IUnknown, shows up in Rust’s Arc, Swift’s automatic reference counting, and CPython’s object model. The pattern was not invented by COM, but COM proved it could work as the sole lifetime-management mechanism for a component ecosystem of staggering scale.
Even the failures are instructive. The lesson of “A Note on Distributed Computing” — that you must not hide the network behind a local-call abstraction — has become distributed-systems orthodoxy. Modern frameworks like gRPC make network calls syntactically convenient but semantically distinct: you pass context objects with deadlines, you handle Status errors that encode network-specific failure modes, you think in terms of streams and messages rather than method calls on stateful objects. The prayer of the 1990s was “make the network invisible.” The hard-won wisdom of the 2000s was “make the network explicit.”
The 1990s believed the future was distributed objects. It wasn’t. But the engineers who built CORBA and COM were not wrong about the problem — they were wrong about the abstraction. The problem of making software components, written in different languages by different teams, interoperate across boundaries remains as urgent as ever. We just stopped pretending that the boundary between “here” and “there” doesn’t matter.