Design Considerations

In CONA, the Orchestration Layer and the Convergence Layer are designed to be maximally compatible with the Internet's existing and planned SDN and NFV technology roadmaps, while preserving the individual evolution paths of SDN and NFV. On this basis, the I42 interface bridges network control and service orchestration capabilities, achieving SDN and NFV synergy. At the same time, a computational resource management system is introduced into the overall architecture; it is primarily responsible for the management, modeling, and trading of heterogeneous computational resources, so that computational information can be exchanged between the Infrastructure Layer and the Convergence Layer.
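As a rough illustration of how heterogeneous computational resources might be modeled and reported upward, the Go sketch below defines a hypothetical resource descriptor and advertisement. CONA does not specify this data model; the type and field names (`ComputeResource`, `Advertisement`, `PricePerU`, and so on) are assumptions made here purely for clarity.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ComputeResource is a hypothetical descriptor for one heterogeneous
// computational resource (CPU, GPU, FPGA, ...) advertised by the
// Infrastructure Layer. Field names are illustrative, not part of CONA.
type ComputeResource struct {
	NodeID    string  `json:"node_id"`
	Kind      string  `json:"kind"`           // e.g. "cpu", "gpu", "fpga"
	Capacity  float64 `json:"capacity"`       // normalized compute units
	Available float64 `json:"available"`      // currently unallocated units
	PricePerU float64 `json:"price_per_unit"` // used by the trading function
}

// Advertisement bundles the resources a node reports upward so that
// computational information can reach the Convergence Layer.
type Advertisement struct {
	Resources []ComputeResource `json:"resources"`
}

func main() {
	ad := Advertisement{Resources: []ComputeResource{
		{NodeID: "edge-1", Kind: "gpu", Capacity: 100, Available: 40, PricePerU: 0.8},
		{NodeID: "edge-1", Kind: "cpu", Capacity: 64, Available: 20, PricePerU: 0.1},
	}}
	// The management system could serialize the advertisement and push it
	// over whatever transport the Infrastructure Layer exposes.
	out, _ := json.MarshalIndent(ad, "", "  ")
	fmt.Println(string(out))
}
```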

The Convergence Layer interacts with the Orchestration Layer through the I43 interface, exchanging information about virtual resources such as virtual machines and containers and thereby determining how they are deployed on hardware computational resources. In this architecture, network forwarding and computational resources are integrated into the Infrastructure Layer, reflecting the trend toward unifying networking and computation in future network development.
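The Go sketch below shows one way an I43-style exchange could drive deployment onto hardware resources: the Orchestration Layer describes a requested VM or container, and the Convergence Layer picks a host with enough free capacity. The message shape and the first-fit placement rule are illustrative assumptions, not part of CONA's interface definition.

```go
package main

import (
	"errors"
	"fmt"
)

// VirtualResourceRequest is a hypothetical I43 message in which the
// Orchestration Layer asks for a VM or container of a given size.
type VirtualResourceRequest struct {
	Kind string  // "vm" or "container"
	CPU  float64 // requested compute units
	Mem  float64 // requested memory in GiB
}

// Host summarizes the hardware computational resources the Convergence
// Layer knows about in the Infrastructure Layer.
type Host struct {
	Name    string
	FreeCPU float64
	FreeMem float64
}

// place picks the first host that can satisfy the request, a deliberately
// naive stand-in for the Convergence Layer's deployment decision.
func place(req VirtualResourceRequest, hosts []Host) (string, error) {
	for _, h := range hosts {
		if h.FreeCPU >= req.CPU && h.FreeMem >= req.Mem {
			return h.Name, nil
		}
	}
	return "", errors.New("no host can satisfy the request")
}

func main() {
	hosts := []Host{
		{Name: "rack1-server3", FreeCPU: 8, FreeMem: 16},
		{Name: "rack2-server1", FreeCPU: 32, FreeMem: 128},
	}
	req := VirtualResourceRequest{Kind: "container", CPU: 16, Mem: 32}
	host, err := place(req, hosts)
	if err != nil {
		fmt.Println("placement failed:", err)
		return
	}
	fmt.Printf("deploy %s on %s\n", req.Kind, host)
}
```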

CONA also provides personalized, targeted services for computational resource providers, computational service providers, and computational service consumers. Computational resource providers expose their capabilities mainly through the Convergence Layer, while computational service providers and consumers do so mainly through the Orchestration Layer. For specific service providers and consumers, the computational network can provision cloud resources. For providers and consumers of computational resources, a decentralized computational management system is constructed, enabling the computational network to meet computational sharing and trading needs and to control computational resources more precisely, as sketched below.
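To make the sharing-and-trading idea concrete, the sketch below matches a consumer's bid for computational units against providers' offers. The `Offer`/`Bid` records and the simple matching rule are hypothetical; a real decentralized management system would add distributed consensus, settlement, and policy on top of anything like this.

```go
package main

import "fmt"

// Offer and Bid are hypothetical records in a decentralized computational
// management system: providers publish offers, consumers publish bids.
type Offer struct {
	Provider string
	Units    float64
	MinPrice float64 // per unit
}

type Bid struct {
	Consumer string
	Units    float64
	MaxPrice float64 // per unit
}

// match pairs a bid with the first offer whose price and volume fit;
// a toy illustration of the trading function, not part of CONA.
func match(bid Bid, offers []Offer) (Offer, bool) {
	for _, o := range offers {
		if o.Units >= bid.Units && o.MinPrice <= bid.MaxPrice {
			return o, true
		}
	}
	return Offer{}, false
}

func main() {
	offers := []Offer{
		{Provider: "edge-dc-a", Units: 50, MinPrice: 0.9},
		{Provider: "edge-dc-b", Units: 200, MinPrice: 0.5},
	}
	bid := Bid{Consumer: "ai-service-x", Units: 120, MaxPrice: 0.6}
	if o, ok := match(bid, offers); ok {
		fmt.Printf("%s buys %.0f units from %s at %.2f/unit\n",
			bid.Consumer, bid.Units, o.Provider, o.MinPrice)
	} else {
		fmt.Println("no matching offer")
	}
}
```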

Furthermore, along CONA's evolutionary path, collaboration with network operators will be sought wherever possible, so that network capabilities are built on SRv6, compatible with both SR-BE and SR-TE modes, and rely primarily on distributed programmability in the network. AI service capabilities are built on cloud-native technologies, remain compatible with virtualization and other modes, and are evolving toward adaptive control of cloud resources, decentralized service governance, and serverless deployment of application services.
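As a small illustration of the difference between the two SRv6 modes, the sketch below builds a segment list either from the destination's node SID alone (SR-BE, where IGP shortest-path routing handles the rest) or from an explicit list of waypoint SIDs (SR-TE). The helper function and the SID values are hypothetical and do not represent an actual SRv6 API.

```go
package main

import "fmt"

// Mode distinguishes the two SRv6 forwarding styles: SR-BE uses only the
// destination's node SID, while SR-TE pins an explicit segment list.
type Mode int

const (
	SRBE Mode = iota
	SRTE
)

// buildSegmentList is an illustrative helper (not a real SRv6 API) that
// returns the SID list to encode for a given mode.
func buildSegmentList(mode Mode, dstSID string, waypoints []string) []string {
	if mode == SRBE {
		// Best-effort: only the destination node SID is needed.
		return []string{dstSID}
	}
	// Traffic engineering: steer through explicit waypoint SIDs first.
	return append(append([]string{}, waypoints...), dstSID)
}

func main() {
	dst := "2001:db8:0:100::1"
	waypoints := []string{"2001:db8:0:20::1", "2001:db8:0:30::1"}

	fmt.Println("SR-BE segments:", buildSegmentList(SRBE, dst, nil))
	fmt.Println("SR-TE segments:", buildSegmentList(SRTE, dst, waypoints))
}
```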