- January 13, 2023
- News
Summary
As we end the second decade of the 21st century, many of our biggest challenges with developing digital infrastructure still originate directly from our inability to creatively collaborate to solve diverse technical issues.

The first industrial revolution bent the curve of human history almost ninety degrees. Driven by innovations like the steam engine, it reshaped economic and social development forever; the steam engine vastly increased the amount of power available to factories and revolutionized land and sea travel. The second industrial revolution was driven by three innovations: electricity, the internal combustion engine, and indoor plumbing. These inventions had equally far-reaching impacts. Electricity, for instance, lit factories, office buildings, and warehouses and led to innovations like air conditioning.
Both industrial revolutions had one thing in common: new technologies spread across the economic landscape like “Typhoid Mary.” The innovations that enabled the industrial age were “horizontal” in nature – they were pervasive, improved over time, and became enablers that spawned new innovations. The industrial revolution allowed society to overcome the limitations of muscle power (both animal and human) and ushered in the world’s first machine age.
Today, with the rise of digital technologies and systems, we are witnessing a similar phenomenon. Digital innovation is doing for our brain power what the steam engine did for our muscle power.
People understand that digital systems are interwoven into nearly every aspect of life and increasingly expect more from the technology they consume. And yet digital architects and developers struggle to build new innovations on top of legacy technologies that simply cannot deliver the performance, scale and flexibility modern digital services require. To meet the demand for new digital experiences and deliver technology solutions that improve people’s lives, we need to modernize the foundational elements of our digital infrastructure.
Some basic design principles must be put in place to guide the growth of this vast, distributed system, which must remain organized even as it evolves according to a logic all its own. That demands thinking about architecture in a completely new way, one that confronts the limitations of today’s hardware, software and network technologies that current IT and telecom developments leave unaddressed.
Future: Requirements for digital infrastructure
For the last ten years or so, the evolution of digital systems has largely consisted of moving data and workloads from physical hardware to virtual platforms. Developers have moved the data center to the cloud, and businesses have transitioned from managing their own computing and network assets to “everything as a service.” We have transformed diverse processes from human-mediated manual workflows to automation supported by cloud services and mobile apps. Digital innovations have transformed the way we work, with distributed, low-cost access to content and information services.
Enabling these digital innovations has forced developers to move compute, storage and networking resources from their traditionally centralized locations, such as data centers and clouds, to decentralized or distributed (edge) locations that are closer to where data is generated and consumed. This architectural shift is changing the economics of information systems, reducing costs and significantly increasing the flexibility of computing and networking resources. This, in turn, is changing how businesses operate. The shift to distributed systems is giving a much larger number of businesses across the economy access to powerful new tools such as AI, IoT and personalized microservices.
The underlying technologies that are enabling more complex adaptive systems are all, in some way, trying to break from today’s software and computing paradigms. Reaching this goal requires a new view of architecture that overcomes today’s performance limitations. What are the major obstacles that need to be overcome?
- First, we need more distributed and far more intelligent infrastructure, with powerful, pervasive networking and compute resources embedded deeply into everyday life, to overcome the infrastructural limitations that constrain newer, more dynamic software applications and services.
- Second, emerging applications are increasingly running on specialized hardware platforms such as wearables and VR headsets, application-specific integrated circuits and accelerators for AI and inferencing workloads, as well as field-programmable gate arrays for specialized use cases such as software-defined networking devices. This requires new development tools that can abstract and reduce development complexity to enable application-specific hardware devices to work seamlessly with software.
- Third, software applications have become distributed, needing to work across multiple clouds, at the edge, and on diverse devices supporting multi-modal interfaces including voice, wearables, touch and AR/VR, in addition to web and mobile. Emerging software applications will increasingly need to support new and novel user experiences.
- Fourth, we need better ways to manage data interactions and eliminate data boundaries: getting application data close to its point of use while propagating changes quickly and consistently, so that application experiences stay fast, reliable and trusted. Connecting users with the right code and the right data at the right time requires intelligent orchestration of application traffic and workloads across dynamic, distributed users and applications (a simple sketch follows this list).
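To make the fourth point concrete, here is a minimal, hypothetical sketch of the kind of placement decision such orchestration has to make: route a request to the nearest edge location that still holds a sufficiently fresh replica of the data the application needs. The site names, latency figures and freshness threshold are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    latency_ms: float        # measured round-trip latency from the user
    replica_age_s: float     # how stale this site's copy of the data is

def route_request(sites: list[EdgeSite], max_staleness_s: float = 5.0) -> EdgeSite:
    """Pick the lowest-latency site whose data replica is fresh enough.

    Falls back to the freshest site if none meets the staleness bound,
    trading latency for consistency.
    """
    fresh = [s for s in sites if s.replica_age_s <= max_staleness_s]
    if fresh:
        return min(fresh, key=lambda s: s.latency_ms)
    return min(sites, key=lambda s: s.replica_age_s)

# Illustrative values only: three hypothetical edge sites.
sites = [
    EdgeSite("metro-pop-1", latency_ms=8, replica_age_s=2.0),
    EdgeSite("metro-pop-2", latency_ms=5, replica_age_s=30.0),   # closer, but stale
    EdgeSite("regional-dc", latency_ms=40, replica_age_s=0.5),
]

print(route_request(sites).name)  # -> "metro-pop-1"
```

Real orchestration layers weigh many more signals (load, cost, policy), but the trade-off between proximity and data freshness is at the core of the problem.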
Harmonizing and aligning digital infrastructure technologies
Evolution of silicon and electronics hardware to enable AI, ML, & VR use cases
In a world driven by data analytics and diverse workloads, the need for compute performance has spurred an onslaught of competing processor designs and architectures to help accelerate digital services. The days of ‘one size fits all’ processors and hardware are gone. Now we see more and more specialized hardware designs to address very application-specific compute requirements. CPUs are being combined with GPUs along with FPGA solutions, ASICs and custom designs to accelerate compute performance for applications and use cases such as artificial intelligence (AI) and machine learning (ML) workloads and new multi-modal user experiences including augmented reality (AR) and virtual reality (VR).
The past decade has seen an explosion in processor parallelism with the move to multicores. The next major architectural trend is the move from symmetric multiprocessors to heterogeneous systems, with multiple different processors and memories. These new silicon architectures enable the acceleration of data-parallel applications such as deep learning, image and video processing, and financial applications.
Developing the software to enable new higher performance application solutions requires better tools and processes to address the rapidly rising levels of complexity. What causes these complexities?
- New intelligent hardware devices as well as cluster nodes in data centers are becoming heterogeneous (multi-core CPU and GPU) in order to increase their compute capabilities and performance;
- Code executing at the level of a node or device constantly needs to be ported to new architectures and accelerators to continue to improve performance; and,
- Most existing code is written for sequential execution, over-optimized for a particular architecture and difficult to port.
All of this creates significant challenges to efficiently migrate code from existing machines to new hardware designs and architectures. As hardware advances and diversifies, we’re entering what observers believe is a new generation of computer architectures. However, every time a new world-changing silicon design is introduced, it comes with a new software stack of tools. The ever-expanding hardware landscape can be daunting for software developers. While hardware innovations have positive impacts on price and functionality, they inevitably lead to spiraling software complexity on the back end.
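One way developers cope with this today is to write against an array abstraction and let a library decide whether the code runs on a CPU or a GPU. The sketch below assumes the optional CuPy package (and a CUDA-capable GPU) is available alongside NumPy; it illustrates the portability idea rather than any particular vendor stack.

```python
import numpy as np

# Prefer a GPU-backed array library if one is available; otherwise fall
# back to NumPy on the CPU. CuPy mirrors much of NumPy's API, so the
# same data-parallel code runs on either backend.
try:
    import cupy as xp          # assumption: CuPy + a CUDA GPU are present
except ImportError:
    xp = np

def normalize(values):
    """Scale a vector to zero mean and unit variance, in parallel."""
    arr = xp.asarray(values, dtype=xp.float32)
    return (arr - arr.mean()) / arr.std()

result = normalize([1.0, 2.0, 3.0, 4.0])
# Bring the result back to the host as a plain NumPy array if needed.
print(np.asarray(result if xp is np else result.get()))
```

The catch, as noted above, is that each new class of accelerator still tends to arrive with its own software stack, so these abstractions only go so far.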
Software impacts on cloud computing and network performance
The challenges with modernizing digital infrastructure don’t end with hardware innovations. Each new wave of hardware is inextricably tied to cloud computing and networking performance, all of which requires new software. The exponential pace of these parallel innovations is making software development increasingly untenable.
The impacts of these technological advances are twofold: On the one hand, new software technologies are enabling many new business and consumer-focused applications including AI, robotics and Internet of Things (IoT) solutions, as well as augmented reality and virtual reality. On the other hand, they have greatly increased the complexity of software development up and down the technology stack.
Applications are now distributed and dynamic in nature, needing to work across different cloud configurations, diverse locations and devices. Software development needs a completely new approach – one where open-source software development tools and libraries are modular and interchangeable and where applications can be decomposed into microservices and components that can be assembled and then re-used. New development frameworks will need to simplify application development and shorten time to market by better organizing developer tools and run-time support for multi-cloud, edge, IoT and application-specific hardware devices.
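As a toy illustration of that kind of modularity, the sketch below assembles an application from small, interchangeable components. Every name here is hypothetical; real frameworks add service discovery, deployment and monitoring on top of this basic idea.

```python
from typing import Callable, Dict

# A registry of small, single-purpose components that can be swapped
# or re-used across applications.
COMPONENTS: Dict[str, Callable[[dict], dict]] = {}

def component(name: str):
    """Register a function as a named, reusable component."""
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        COMPONENTS[name] = fn
        return fn
    return wrap

@component("clean")
def clean(event: dict) -> dict:
    # Clamp obviously bad sensor readings.
    return {**event, "value": max(0.0, event["value"])}

@component("enrich")
def enrich(event: dict) -> dict:
    # Tag the event with a default source if none is present.
    return {**event, "source": event.get("source", "edge-sensor")}

def assemble(*names: str) -> Callable[[dict], dict]:
    """Assemble an application as a pipeline of registered components."""
    def app(event: dict) -> dict:
        for name in names:
            event = COMPONENTS[name](event)
        return event
    return app

pipeline = assemble("clean", "enrich")
print(pipeline({"value": -3.2}))   # {'value': 0.0, 'source': 'edge-sensor'}
```

Because each component is independent of the others, any one of them can be replaced or re-used in a different pipeline without touching the rest of the application.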
Enterprises have diverse users, functions and entities, all with an overabundance of data flows and interactions. Today, many organizations are struggling to turn the operational data generated by their people, machines and fleets into tangible business value. Data is often trapped in machines, equipment and incompatible systems, or stored locally on workstations and drives. Extracting value from diverse data types and disparate data sources requires special skills that are in short supply — cloud server provisioning, data science, multiple programming languages and more.
What’s required are orchestration tools that creatively combine multiple innovations in cloud infrastructure management, workflow automation and data application development to reduce complexity and better leverage architects, developers and integrators. Orchestration has the potential to enable software developers, subject matter experts and business users to rapidly and collaboratively build diverse distributed data applications.
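At its core, that kind of orchestration is about working out a valid execution order from a dependency graph of tasks. The sketch below uses a purely hypothetical workflow, where data must be ingested at the edge before a model is trained in the cloud and pushed back out.

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: each task lists the tasks it depends on.
workflow = {
    "ingest_edge_data": set(),
    "clean_data":       {"ingest_edge_data"},
    "train_model":      {"clean_data"},
    "deploy_to_edge":   {"train_model"},
    "build_dashboard":  {"clean_data"},
}

def run(task: str) -> None:
    # In a real orchestrator this step would provision infrastructure,
    # launch a container, or call a data service.
    print(f"running {task}")

# TopologicalSorter yields the tasks in an order that respects dependencies.
for task in TopologicalSorter(workflow).static_order():
    run(task)
```

Production orchestration tools layer scheduling, retries, security and multi-cloud placement on top of this, which is what would let domain experts, developers and business users collaborate on distributed data applications without hand-wiring every step.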
Today, relatively few companies understand the necessity of re-thinking how their applications will be developed, consumed and managed. Even the latest cloud-native, edge and IoT device development platforms have shortcomings. They tackle only bits and pieces of the problem and do not address the overarching complexities of software development, tooling, testing, security and monitoring.
High performance hybrid networks & decentralized smart systems
A new generation of higher performance networks is driving a cycle of decentralization and distribution of digital resources. Powerful distributed technologies such as edge computing, IoT, blockchain, and more demonstrate the power and potential of decentralized systems, relationships and interactions.
However, the reality is that Rome wasn’t built in a day. The failure of 5G networks to attain their hoped-for boost in functionality and performance, coupled with their high costs (both CAPEX and OPEX), is just one example. So, how should developers of networking technologies think about wireless and real-time networking?
To answer that question, consider the networking that you and everybody else uses every day: wireless local area networking, or Wi-Fi. Wi-Fi “works fine” for your laptop and phone while you’re in your home or office, and if it’s configured correctly, it’s even reasonably secure. But if you think that Wi-Fi is “good reliable networking,” you have another thing coming. Wi-Fi isn’t even slightly “deterministic”—i.e., there is no guarantee of performance at any level.
Similarly, if you’re one of those people who thinks that 5G is going to solve world hunger, you really have another thing coming. The technologies behind 5G enable the use of very high frequencies. The higher the frequency, the shorter the wavelength. Shorter wavelengths enable faster speeds and lower latency. But that’s not the whole story. With shorter wavelengths, the distance between the device and the cell site has to be shorter, and the signal itself can’t penetrate dense materials like walls and trees. To get around these deployment challenges, carriers will need to deploy vastly more towers and cell sites.
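The relationship is simple arithmetic: wavelength is the speed of light divided by frequency, so the millimeter-wave bands used for the fastest 5G service have wavelengths on the order of a centimeter, versus roughly half a meter for the low-band spectrum used for wide-area coverage. The quick calculation below uses representative frequencies (600 MHz low-band, 3.5 GHz mid-band, 28 GHz millimeter-wave); exact band assignments vary by country and carrier.

```python
C = 299_792_458  # speed of light in m/s

# Representative 5G frequencies; actual allocations differ by market.
bands = {
    "low-band (600 MHz)":       600e6,
    "mid-band (3.5 GHz)":       3.5e9,
    "millimeter-wave (28 GHz)": 28e9,
}

for name, freq_hz in bands.items():
    wavelength_m = C / freq_hz  # wavelength = c / frequency
    print(f"{name:26s} wavelength ~ {wavelength_m * 100:.1f} cm")
```

Centimeter-scale wavelengths are easily blocked by walls, foliage and even rain, which is why dense deployments of small cells are unavoidable.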
In order to have reasonable coverage, providers must build 5G antennas and towers everywhere, and very close to users. This is a time-consuming and expensive process that will make its rollout slow and uneven. When carriers say they’re rolling out 5G in a city, what they mean is that 5G will be available in limited pockets of that city. The consistent, fast and reliable 5G dream everybody talks about will be available in some offices, entertainment venues and other locations, but not all at once.
We need to better orchestrate network resources to enable robust broadband services that support diverse applications and use cases. Without virtually every cell site and tower having a fiber optic connection on a highly resilient network, 5G will remain somewhat limited. Right now, there just isn’t enough fiber in the ground.
Digital infrastructure needs more than "bolting together" technologies

In the age of smart distributed systems, simply “bolting together” diverse edge devices ranging from smartphones to IMECS, integrating them onto 5G radio access networks (RAN), and then trying to combine those devices with distributed application technologies such as containerized microservices or data-intensive AI and IoT applications does not add up to a coherent architecture. Similarly, integrating a diverse hodgepodge of technologies will not render a seamless user experience or enable the metaverse environment we are eagerly awaiting.
Digital architecture collaboration
We are at the dawn of a new age of information and infrastructure technology in which distributed networked systems combine with pervasive computing to take the notion that the network is the computer to new heights. These systems will require new hardware designs, better software development tools and more dynamic networks that overcome infrastructural limitations.
Today, multiple parallel technology developments appear to be increasingly reinforcing and accelerating one another. But are these innovations actually aligned and will the building blocks of digital infrastructure really play well together?
Perhaps the most important impact resulting from the shift to digital systems has to do with how these technologies are developed and the architectural relationships between and among the differing technologies. Networking, computing, applications, device management and identity have all, for the most part, evolved in relatively autonomous development paths. We strongly believe this will need to change.
The two biggest challenges with developing digital infrastructure are: 1) aligning players and developments with an agreed-upon architecture; and 2) synchronizing developments across the ecosystem of core technology developers. This type of coordination and collaboration requires a subtle dance of alignment, timing and delivery: the ability to see where, and how fast, capabilities are progressing, and thus to align development work while managing complexity.
All of the traditional categories of players that are theoretically driving digital infrastructure innovations (IT systems, telecom systems, automation and control systems, software development and more) have historically operated within well-established technical development protocols and business models that reflected the distinct competencies each group has developed over the last 20+ years. In short, they are trapped in their own “sunk cost” economics and blinded to any alternative future beyond the linear development path they have been on.
As the number and diversity of stakeholders expands (users, architects, developers, supporters, etc.), and the volume and nature of their interactions grows, the discrete technologies that comprise digital infrastructure will need to become more and more tightly coupled. Each core technology must be viewed in close proximity to the others and, by necessity, needs to be mutually supportive without inhibiting them. However, trying to coordinate and align the respective roles of each core technology often creates contention.
We are coming to see the continuously evolving relationship between these core technologies as fertile ground for innovation. They need to be interwoven and mutually supportive to realize their combined potential, yet coordinating the respective roles of technology architecture and business architecture creates contention of its own.
As is often the case, all of this adds up to the need for a new architecture for distributed systems, one profoundly different from what’s in common use today, and for new types of relationships that foster cooperation.
The shift to truly intelligent distributed digital systems requires a new generation of architectural capabilities that are designed and built from the ground up. As we end the second decade of the 21st century, many of our biggest challenges with developing digital infrastructure still originate directly from our inability to creatively collaborate to solve diverse technical issues.
This report is from Harbor Research.