This application is a non-provisional of, and claims the benefit of the filing date of, U.S. Provisional Application No. 63/215,264, filed June 25, 2021, the contents of which are incorporated herein by reference in their entirety for all purposes.
This disclosure relates to a framework and routing mechanisms for graphics processing units (GPUs) hosted on multiple host machines in a cloud environment.
Organizations continue to move business applications and databases to the cloud to reduce the cost of purchasing, upgrading, and maintaining on-premises hardware and software. High Performance Computing (HPC) applications constantly consume 100% of the available computing power to achieve a specific result. HPC applications demand dedicated network performance, fast storage, extensive compute resources, and significant amounts of memory, i.e., resources that are in short supply in the virtualized infrastructure that constitutes today's clouds.
Cloud infrastructure service providers are offering newer and faster CPUs and graphics processing units (GPUs) to satisfy the demands of HPC applications. Typically, a virtual topology is created to enable the multiple GPUs hosted on different host machines to communicate with one another. In practice, a ring topology is used to connect the different GPUs. Ring networks, however, are inherently blocking in nature. As such, the overall performance of the system is degraded. The embodiments discussed herein address these and other issues related to connectivity of GPUs that span multiple host machines.
The present disclosure relates generally to routing mechanisms for graphics processing units (GPUs) hosted on multiple host machines in a cloud environment. Various embodiments are described herein, including methods, systems, non-transitory computer-readable storage media storing programs, code, or instructions executable by one or more processors, and the like. These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description section, and further description is provided there.
An embodiment of the present disclosure is directed to a method comprising: for a packet transmitted by a graphics processing unit (GPU) of a host machine and received by a network device, determining, by the network device, an ingress port connection of the network device on which the packet was received; identifying, by the network device and based on a GPU routing policy, an egress port connection that corresponds to the ingress port connection, wherein the GPU routing policy is preconfigured prior to receiving the packet and establishes a mapping of each ingress port connection of the network device to a single egress port connection of the network device; and forwarding, by the network device, the packet on the egress port connection of the network device.
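The ingress-to-egress mapping established by the GPU routing policy described above can be sketched as a simple one-to-one lookup. The following is an illustrative sketch only; the port names and the policy contents are assumptions for illustration, not an actual device configuration or API.

```python
# Hypothetical GPU routing policy: each ingress port connection maps to
# exactly one egress port connection, preconfigured before any packet
# arrives, so traffic from a given GPU always takes the same path.
GPU_ROUTING_POLICY = {
    "ethernet1/1": "ethernet2/1",
    "ethernet1/2": "ethernet2/2",
    "ethernet1/3": "ethernet2/3",
}

def forward(ingress_port: str) -> str:
    """Return the egress port for a packet received on ingress_port."""
    return GPU_ROUTING_POLICY[ingress_port]

print(forward("ethernet1/2"))  # ethernet2/2
```

Because the mapping is deterministic, packets on a given ingress port never contend with flows arriving on other ingress ports for the same egress port, which is the collision-avoidance property the method relies on.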
An aspect of the present disclosure provides a system that includes one or more data processors, and a non-transitory computer-readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods disclosed herein.
Another aspect of the present disclosure provides a computer program product, tangibly embodied on a non-transitory machine-readable storage medium, including instructions configured to cause one or more data processors to perform part or all of one or more methods described herein.
The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.
The features, embodiments, and advantages of the present disclosure will be better understood when the following detailed description is read with reference to the accompanying drawings.
FIG. 1 is a high-level diagram of a distributed environment showing a virtual or overlay cloud network hosted by a cloud service provider infrastructure, according to certain embodiments.
FIG. 2 depicts a simplified architectural diagram of the physical components in the physical network within CSPI, according to certain embodiments.
FIG. 3 shows an example arrangement within CSPI where a host machine is connected to multiple network virtualization devices (NVDs), according to certain embodiments.
FIG. 4 depicts connectivity between a host machine and an NVD for providing I/O virtualization to support multi-tenancy, according to certain embodiments.
FIG. 5 depicts a simplified block diagram of a physical network provided by a CSPI, according to certain embodiments.
FIG. 6 depicts a simplified block diagram of a cloud infrastructure incorporating a CLOS network arrangement, according to certain embodiments.
FIG. 7 depicts an example scenario illustrating a flow collision in the cloud infrastructure of FIG. 6, according to certain embodiments.
FIG. 8 depicts a policy-based routing mechanism implemented in the cloud infrastructure, according to certain embodiments.
FIG. 9 depicts a block diagram of a cloud infrastructure illustrating different types of connections in the cloud infrastructure, according to certain embodiments.
FIG. 10 depicts an exemplary configuration of a rack included in the cloud infrastructure, according to certain embodiments.
FIG. 11A illustrates a flowchart depicting steps performed by a network device in routing a packet, according to certain embodiments.
FIG. 11B illustrates another flowchart depicting steps performed by a network device in routing a packet, according to certain embodiments.
FIG. 12 is a block diagram illustrating one pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
FIG. 13 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
FIG. 14 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
FIG. 15 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
FIG. 16 is a block diagram illustrating an exemplary computer system, according to at least one embodiment.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
The term cloud service is generally used to refer to a service that is made available by a cloud services provider (CSP) to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP's infrastructure are separate from the customer's own on-premises servers and systems. Customers can thus avail themselves of cloud services provided by the CSP without having to purchase separate hardware and software resources for the services. Cloud services are designed to provide a subscribing customer easy, scalable access to applications and computing resources without the customer having to invest in procuring the infrastructure that is used for providing the services.
There are different cloud service providers offering different types of cloud services. There are different types or models of cloud services including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS) and others.
A customer can subscribe to one or more cloud services provided by a CSP. The customer can be any entity such as an individual, an organization, an enterprise, and the like. When a customer subscribes to or registers for a service provided by a CSP, a tenancy or an account is created for that customer. The customer can then, via this account, access the subscribed-to one or more cloud resources associated with the account.
As mentioned above, Infrastructure as a Service (IaaS) is a specific type of cloud computing service. In an IaaS model, the CSP provides an infrastructure (known as a Cloud Service Provider Infrastructure, or CSPI) that customers can use to build their own customizable networks and provision customer resources. Therefore, customer networks and resources are hosted in a distributed environment on top of the infrastructure provided by a CSP. This differs from traditional computing, where customer networks and resources are hosted on customer-supplied infrastructure.
CSPI may include interconnected high-performance compute resources including various host machines, memory resources, and network resources that form a physical network, which is also referred to as a substrate network or an underlay network. The resources in CSPI may be spread across one or more data centers that may be geographically spread across one or more geographical regions. Virtualization software may be executed by these physical resources to provide a virtualized distributed environment. The virtualization creates an overlay network (also known as a software-based network, a software-defined network, or a virtual network) over the physical network. The CSPI physical network provides the underlying basis for creating one or more overlay or virtual networks on top of the physical network. The virtual or overlay networks can include one or more virtual cloud networks (VCNs). The virtual networks are implemented using software virtualization technologies (e.g., hypervisors, functions performed by network virtualization devices (NVDs) (e.g., smartNICs), top-of-rack (TOR) switches, smart TORs that implement one or more functions performed by an NVD, and other mechanisms) to create layers of network abstraction that can be run on top of the physical network. Virtual networks can take on many forms, including peer-to-peer networks, IP networks, and others. Virtual networks are typically either Layer-3 IP networks or Layer-2 VLANs. This method of virtual or overlay networking is often referred to as virtual or overlay Layer-3 networking. Examples of protocols developed for virtual networks include IP-in-IP (or Generic Routing Encapsulation (GRE)), Virtual Extensible LAN (VXLAN, IETF RFC 7348), Virtual Private Networks (VPNs) (e.g., MPLS Layer-3 Virtual Private Networks (RFC 4364)), VMware's NSX, GENEVE (Generic Network Virtualization Encapsulation), and others.
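As one concrete illustration of the encapsulation protocols named above, the VXLAN header defined in IETF RFC 7348 is only eight bytes: a flags field and a 24-bit VXLAN Network Identifier (VNI) that keeps different tenants' overlay segments separate. The sketch below builds that header; it is a simplified illustration of the RFC's wire format, not the CSPI's actual encapsulation code.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    "I" bit set (VNI field is valid), reserved bits set to zero, and a
    24-bit VXLAN Network Identifier identifying the overlay segment."""
    flags = 0x08  # "I" flag
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(5001)
assert len(hdr) == 8
assert hdr[0] == 0x08  # flags byte with the I bit set
```

In an overlay network, this header (preceded by outer Ethernet/IP/UDP headers addressed with substrate addresses) wraps the original Layer-2 frame, so the physical network routes on substrate addresses while the VNI preserves tenant isolation.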
For IaaS, the infrastructure provided by a CSP (the CSPI) can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing services provider may host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. CSPI provides infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted distributed environment. CSPI offers high-performance compute resources and capabilities and storage capacity in a flexible virtual network that is securely accessible from various networked locations, such as from a customer's on-premises network. When a customer subscribes to or registers for an IaaS service provided by a CSP, the tenancy created for that customer is a secure and isolated partition within the CSPI where the customer can create, organize, and administer their cloud resources.
Customers can build their own virtual networks using compute, memory, and networking resources provided by CSPI. One or more customer resources or workloads, such as compute instances, can be deployed on these virtual networks. For example, a customer can use resources provided by CSPI to build one or more customizable and private virtual networks referred to as virtual cloud networks (VCNs). A customer can deploy one or more customer resources, such as compute instances, on a customer VCN. Compute instances can take the form of virtual machines, bare metal instances, and the like. The CSPI thus provides infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available virtual hosted environment. The customer does not manage or control the underlying physical resources provided by CSPI but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., firewalls).
The CSP can provide a console that allows customers and network administrators to configure, access, and manage cloud-deployed resources using CSPI resources. In certain embodiments, the console provides a web-based user interface that can be used to access and manage CSPI. In some implementations, the console is a web-based application provided by the CSP.
CSPI can support single-tenant or multi-tenant architectures. In a single-tenant architecture, the software (e.g. application, database) or hardware component (e.g. host machine or server) serves a single customer or tenant. In a multi-tenant architecture, a software or hardware component serves multiple clients or users. Therefore, in a multi-tenant architecture, CSPI resources are shared between multiple clients or tenants. In a multi-tenant situation, precautions are taken and safeguards implemented in the CSPI to ensure that each tenant's data remains isolated and invisible to other tenants.
In a physical network, a network endpoint ("endpoint") refers to a computing device or system that is connected to a physical network and communicates back and forth with the network to which it is connected. A network endpoint in the physical network may be connected to a local area network (LAN), a wide area network (WAN), or another type of physical network. Examples of traditional endpoints in a physical network include modems, hubs, bridges, switches, routers and other networking devices, physical computers (or host machines), and the like. Each physical device in the physical network has a fixed network address that can be used to communicate with the device. This fixed network address can be a Layer-2 address (e.g., a MAC address), a fixed Layer-3 address (e.g., an IP address), and the like. In a virtualized environment or in a virtual network, the endpoints can include various virtual endpoints, such as virtual machines, that are hosted by components of the physical network (e.g., hosted by physical host machines). These endpoints in the virtual network are addressed by overlay addresses, such as overlay Layer-2 addresses (e.g., overlay MAC addresses) and overlay Layer-3 addresses (e.g., overlay IP addresses). Network overlays enable flexibility by allowing network managers to move around the overlay addresses associated with the network endpoints using software management (e.g., via software implementing a control plane for the virtual network). Accordingly, unlike in a physical network, in a virtual network an overlay address (e.g., an overlay IP address) can be moved from one endpoint to another using network management software. Since the virtual network is built on top of a physical network, communications between components in the virtual network involve both the virtual network and the underlying physical network.
In order to facilitate such communications, the components of CSPI are configured to learn and store mappings that map overlay addresses in the virtual network to actual physical addresses in the substrate network, and vice versa. These mappings are then used to facilitate the communications. Customer traffic is encapsulated to facilitate routing in the virtual network.
Accordingly, physical addresses (e.g., physical IP addresses) are associated with components in physical networks and overlay addresses (e.g., overlay IP addresses) are associated with entities in virtual networks. Both the physical IP addresses and overlay IP addresses are types of real IP addresses. These are separate from virtual IP addresses, where a virtual IP address maps to multiple real IP addresses. A virtual IP address provides a one-to-many mapping between the virtual IP address and multiple real IP addresses.
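The address relationships described above can be sketched as two small lookup tables. This is an illustrative model only; the field names and addresses are assumptions, not the CSPI's actual data structures.

```python
# Each overlay IP maps to exactly one substrate (physical) IP.
overlay_to_substrate = {
    "10.0.0.5": "192.168.1.17",
    "10.0.0.6": "192.168.1.42",
}

# A virtual IP, by contrast, is a one-to-many mapping onto several
# real (here, overlay) IP addresses, e.g., for a load-balanced service.
virtual_ip_backends = {
    "203.0.113.10": ["10.0.0.5", "10.0.0.6"],
}

def resolve(virtual_ip: str) -> list:
    """Return all substrate addresses reachable behind a virtual IP."""
    return [overlay_to_substrate[o] for o in virtual_ip_backends[virtual_ip]]

print(resolve("203.0.113.10"))  # ['192.168.1.17', '192.168.1.42']
```

The one-to-one table is the kind of mapping CSPI components learn and store to translate overlay addresses to substrate addresses; the one-to-many table shows why a virtual IP is a distinct concept from both.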
The cloud infrastructure or CSPI is physically hosted in one or more data centers in one or more regions around the world. The CSPI may include components in the physical or substrate network and virtualized components (e.g., virtual networks, compute instances, virtual machines, etc.) that are in a virtual network built on top of the physical network components. In certain embodiments, the CSPI is organized and hosted in realms, regions, and availability domains. A region is typically a localized geographic area that contains one or more data centers. Regions are generally independent of each other and can be separated by vast distances, for example, across countries or even continents. For example, a first region may be in Australia, another one in Japan, yet another one in India, and the like. CSPI resources are divided among regions such that each region has its own independent subset of CSPI resources. Each region may provide a set of core infrastructure services and resources, such as compute resources (e.g., bare metal servers, virtual machines, containers and related infrastructure, etc.); storage resources (e.g., block volume storage, file storage, object storage); networking resources (e.g., virtual cloud networks (VCNs), load balancing resources, connections to on-premises networks); database resources; edge networking resources (e.g., DNS); and access management and monitoring resources, among others. Each region generally has multiple paths connecting it to other regions in the realm.
Generally, an application is deployed in a region (i.e., deployed on infrastructure associated with that region) where it is most heavily used, because using nearby resources is faster than using distant resources. Applications can also be deployed in different regions for various reasons, such as redundancy to mitigate the risk of region-wide events such as large weather systems or earthquakes, or to meet varying requirements of legal jurisdictions, tax domains, and other business or social criteria, and the like.
The data centers within a region can be further organized and subdivided into availability domains (ADs). An availability domain may correspond to one or more data centers located within a region. A region can be composed of one or more availability domains. In such a distributed environment, CSPI resources are either region-specific, such as a virtual cloud network (VCN), or availability domain-specific, such as a compute instance.
ADs within a region are isolated from each other, fault tolerant, and are configured such that they are very unlikely to fail simultaneously. This is achieved by the ADs not sharing critical infrastructure resources such as networking, physical cables, cable paths, cable entry points, etc., such that a failure at one AD within a region is unlikely to impact the availability of the other ADs within the same region. The ADs within the same region may be connected to each other by a low-latency, high-bandwidth network, which makes it possible to provide high-availability connectivity to other networks (e.g., the Internet, customers' on-premises networks, etc.) and to build replicated systems in multiple ADs for both high availability and disaster recovery. Cloud services use multiple ADs to ensure high availability and to protect against resource failure. As the infrastructure provided by the IaaS provider grows, more regions and ADs may be added with additional capacity. Traffic between availability domains is usually encrypted.
In certain embodiments, regions are grouped into realms. A realm is a logical collection of regions. Realms are isolated from each other and do not share any data. Regions in the same realm may communicate with each other, but regions in different realms cannot. A customer's tenancy or account with the CSP exists in a single realm and can be spread across one or more regions that belong to that realm. Typically, when a customer subscribes to an IaaS service, a tenancy or account is created for that customer in the customer-specified region (referred to as the "home" region) within a realm. A customer can extend the customer's tenancy across one or more other regions within the realm. A customer cannot access regions that are not in the realm where the customer's tenancy exists.
An IaaS provider can provide multiple realms, each realm catered to a particular set of customers or users. For example, a commercial realm may be provided for commercial customers. As another example, a realm may be provided for a specific country for customers within that country. As yet another example, a government realm may be provided for a government, and the like. For example, the government realm may be catered to a specific government and may have a heightened level of security relative to a commercial realm. For example, Oracle Cloud Infrastructure (OCI) currently offers a realm for commercial regions and two realms (e.g., FedRAMP authorized and IL5 authorized) for government cloud regions.
In certain embodiments, an AD can be subdivided into one or more fault domains. A fault domain is a grouping of infrastructure resources within an AD to provide anti-affinity. Fault domains allow the distribution of compute instances such that the instances are not on the same physical hardware within a single AD. This is known as anti-affinity. A fault domain refers to a set of hardware components (computers, switches, and more) that share a single point of failure. A compute pool is logically divided up into fault domains. Due to this, a hardware failure or compute hardware maintenance event that affects one fault domain does not affect instances in other fault domains. Depending on the embodiment, the number of fault domains for each AD may vary. For instance, in certain embodiments, each AD contains three fault domains. A fault domain acts as a logical data center within an AD.
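The anti-affinity idea above can be sketched as a simple placement rule: spread a customer's instances across the AD's fault domains so that no two share a single point of failure. The three-fault-domain layout follows the example in the text; the round-robin placement function is an illustrative assumption, not the provider's actual scheduler.

```python
# Three fault domains per AD, per the example in the text.
FAULT_DOMAINS = ["FD-1", "FD-2", "FD-3"]

def place(instances):
    """Assign each instance to a fault domain round-robin, so instances
    are spread across distinct shared-failure hardware groups."""
    return {inst: FAULT_DOMAINS[i % len(FAULT_DOMAINS)]
            for i, inst in enumerate(instances)}

placement = place(["web-1", "web-2", "web-3"])
assert len(set(placement.values())) == 3  # each instance in a distinct FD
```

With this spread, a hardware failure or maintenance event in one fault domain leaves the instances in the other two fault domains unaffected.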
When a customer subscribes to or registers for an IaaS service provided by a CSP, CSPI resources are provisioned for the customer and associated with the customer's tenancy. The customer can use these provisioned resources to build private networks and deploy resources on these networks. The customer networks that are hosted in the cloud by the CSPI are referred to as virtual cloud networks (VCNs). A customer can set up one or more virtual cloud networks (VCNs) using CSPI resources allocated for the customer. A VCN is a virtual or software-defined private network. The customer resources that are deployed in the customer's VCN can include compute instances (e.g., virtual machines, bare metal instances) and other resources. These compute instances can represent various customer workloads such as applications, load balancers, databases, and the like. A compute instance deployed on a VCN can communicate with publicly accessible endpoints ("public endpoints") over a public network such as the Internet, with other instances in the same VCN or other VCNs (e.g., the customer's other VCNs, or VCNs not belonging to the customer), with the customer's on-premises data centers or networks, and with service endpoints and other types of endpoints.
The CSP may provide various services using the CSPI. In some instances, customers of CSPI may themselves act as service providers and provide services using CSPI resources. A service provider may expose a service endpoint, which is characterized by identification information (e.g., an IP address, a DNS name, and a port). A customer's resource (e.g., a compute instance) can consume a particular service by accessing a service endpoint exposed by the service for that particular service. These service endpoints are generally endpoints that are publicly accessible by users using public IP addresses associated with the endpoints via a public communication network such as the Internet. Network endpoints that are publicly accessible are also sometimes referred to as public endpoints.
In certain embodiments, a service provider may provide a service through an endpoint (sometimes referred to as a service endpoint) to the service. Clients of the service can use this service endpoint to access the service. In certain implementations, a service endpoint provided for a service can be accessed by multiple clients intending to use that service. In other implementations, a dedicated service endpoint can be provided for a client so that only that client can access the service using that dedicated service endpoint.
In certain embodiments, when a VCN is created, it is associated with a private overlay Classless Inter-Domain Routing (CIDR) address space, which is a range of private overlay IP addresses that are assigned to the VCN (e.g., 10.0/16). A VCN includes associated subnets, route tables, and gateways. A VCN resides within a single region but can span one or more or all of the region's availability domains. A gateway is a virtual interface that is configured for a VCN and enables communication of traffic to and from the VCN to one or more endpoints outside the VCN. One or more different types of gateways may be configured for a VCN to enable communication to and from different types of endpoints.
A VCN can be subdivided into one or more sub-networks, or subnets. A subnet is thus a unit of configuration or a subdivision that can be created within a VCN. A VCN can have one or multiple subnets. Each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in that VCN and which represent a subset of the address space within the address space of the VCN.
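The two constraints above (each subnet is a subset of the VCN's address space, and sibling subnets do not overlap) can be checked mechanically. The sketch below uses the CIDR values from the text and Python's standard `ipaddress` module; it is an illustration of the constraints, not the provider's validation code.

```python
import itertools
import ipaddress

vcn = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network("10.0.0.0/24"),
           ipaddress.ip_network("10.0.1.0/24")]

# Constraint 1: every subnet lies inside the VCN's overlay address space.
for s in subnets:
    assert s.subnet_of(vcn)

# Constraint 2: no two subnets in the same VCN overlap.
for a, b in itertools.combinations(subnets, 2):
    assert not a.overlaps(b)
```

A subnet such as 10.1.0.0/24 would fail the first check against this VCN, and a second 10.0.0.0/25 subnet would fail the second, which is exactly the kind of misconfiguration the constraints rule out.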
Each compute instance is associated with a virtual network interface card (VNIC), which enables the compute instance to participate in a subnet of a VCN. A VNIC is a logical representation of a physical network interface card (NIC). In general, a VNIC is an interface between an entity (e.g., a compute instance, a service) and a virtual network. A VNIC exists in a subnet and has one or more associated IP addresses and associated security rules or policies. A VNIC is equivalent to a Layer-2 port on a switch. A VNIC is attached to a compute instance and to a subnet within a VCN. A VNIC associated with a compute instance enables the compute instance to be a part of a subnet of a VCN, and enables the compute instance to communicate (e.g., send and receive packets) with endpoints that are on the same subnet as the compute instance, with endpoints in different subnets in the VCN, or with endpoints outside the VCN. The VNIC associated with a compute instance thus determines how the compute instance connects with endpoints inside and outside the VCN. A VNIC for a compute instance is created and associated with that compute instance when the compute instance is created and added to a subnet within a VCN. For a subnet comprising a set of compute instances, the subnet contains the VNICs corresponding to the set of compute instances, each VNIC attached to a compute instance within the set of compute instances.
Each compute instance is assigned a private overlay IP address via the VNIC associated with the compute instance. This private overlay IP address is assigned to the VNIC that is associated with the compute instance when the compute instance is created, and is used for routing traffic to and from the compute instance. All VNICs in a given subnet use the same route table, security lists, and DHCP options. As described above, each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in that VCN and which represent a subset of the address space within the address space of the VCN. For a VNIC on a particular subnet of a VCN, the private overlay IP address that is assigned to the VNIC is an address from the contiguous range of overlay IP addresses allocated for the subnet.
In certain embodiments, a compute instance may optionally be assigned additional overlay IP addresses in addition to the private overlay IP address, such as, for example, one or more public IP addresses if the compute instance is in a public subnet. These multiple addresses are assigned either on the same VNIC or over multiple VNICs that are associated with the compute instance. Each instance, however, has a primary VNIC that is created during instance launch and is associated with the private overlay IP address assigned to the instance; this primary VNIC cannot be removed. Additional VNICs, referred to as secondary VNICs, can be added to an existing instance in the same availability domain as the primary VNIC. All the VNICs are in the same availability domain as the instance. A secondary VNIC can be in a subnet in the same VCN as the primary VNIC, or in a different subnet that is either in the same VCN or a different one.
A compute instance may optionally be assigned a public IP address if it is in a public subnet. A subnet can be designated as either a public subnet or a private subnet at the time the subnet is created. A private subnet means that the resources (e.g., compute instances) and associated VNICs in the subnet cannot have public overlay IP addresses. A public subnet means that the resources and associated VNICs in the subnet can have public IP addresses. A customer can designate a subnet to exist either in a single availability domain or across multiple availability domains in a region or realm.
As described above, a VCN can be subdivided into one or more subnets. In certain embodiments, a virtual router (VR) configured for the VCN (referred to as the VCN VR or simply VR) enables communications between the subnets of the VCN. For a subnet within a VCN, the VR represents a logical gateway for that subnet that enables the subnet (i.e., the compute instances on that subnet) to communicate with endpoints on other subnets within the VCN, and with other endpoints outside the VCN. The VCN VR is a logical entity that is configured to route traffic between VNICs in the VCN and virtual gateways ("gateways") associated with the VCN. Gateways are further described below with respect to FIG. 1. A VCN VR is a Layer-3/IP-Layer concept. In one embodiment, there is one VCN VR for a VCN, where the VCN VR has potentially an unlimited number of ports addressed by IP addresses, with one port for each subnet of the VCN. In this manner, the VCN VR has a different IP address for each subnet in the VCN that the VCN VR is attached to. The VR is also connected to the various gateways configured for a VCN. In certain embodiments, a particular overlay IP address from the overlay IP address range for a subnet is reserved for a port of the VCN VR for that subnet. For example, consider a VCN having two subnets with associated address ranges 10.0/16 and 10.1/16, respectively. For the first subnet within the VCN with address range 10.0/16, an address from this range is reserved for a port of the VCN VR for that subnet. In some instances, the first IP address from the range may be reserved for the VCN VR. For example, for the subnet with overlay IP address range 10.0/16, the IP address 10.0.0.1 may be reserved for a port of the VCN VR for that subnet. For the second subnet within the same VCN with address range 10.1/16, the VCN VR may have a port for that second subnet with IP address 10.1.0.1. The VCN VR has a different IP address for each of the subnets in the VCN.
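The reservation convention described above (the first IP address in a subnet's overlay range is set aside for the VCN VR's port on that subnet) can be sketched in a few lines with Python's `ipaddress` module; this is an illustration of the convention using the example ranges from the text, not the provider's allocation code.

```python
import ipaddress

def vr_port_address(subnet_cidr: str) -> str:
    """Return the first host address in the subnet, reserved here for
    the VCN VR's port on that subnet (e.g., 10.0.0.1 for 10.0.0.0/16)."""
    subnet = ipaddress.ip_network(subnet_cidr)
    return str(subnet.network_address + 1)

print(vr_port_address("10.0.0.0/16"))  # 10.0.0.1
print(vr_port_address("10.1.0.0/16"))  # 10.1.0.1
```

Because the reserved address is derived from the subnet's own range, the VCN VR naturally ends up with a different IP address on each subnet it is attached to, matching the per-subnet port model described above.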
In some other embodiments, each subnet within a VCN may have its own associated VR that is addressable by the subnet using a default or reserved IP address associated with the VR. The reserved or default IP address may, for example, be the first IP address from the range of IP addresses associated with that subnet. The VNICs in the subnet can communicate (e.g., send and receive packets) with the VR associated with the subnet using this default or reserved IP address. In such an embodiment, the VR is the ingress/egress point for that subnet. The VR associated with a subnet within the VCN can communicate with other VRs associated with other subnets within the VCN. The VRs can also communicate with gateways associated with the VCN. The VR functionality for a subnet is performed by one or more NVDs that execute VNIC functionality for VNICs in the subnet.
Route tables, security rules, and DHCP options may be configured for a VCN. Route tables are virtual route tables for the VCN and include rules to route traffic from subnets within the VCN to destinations outside the VCN by way of gateways or specially configured instances. A VCN's route tables can be customized to control how packets are forwarded/routed to and from the VCN. DHCP options refer to configuration information that is automatically provided to the instances when they boot up.
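A route table of the kind just described can be modeled as a list of CIDR-to-target rules resolved by longest-prefix match. The following sketch is illustrative only (the rule structure and target names are hypothetical, not any particular cloud API):

```python
import ipaddress

def route_lookup(route_table, dst_ip):
    """Pick the rule whose destination CIDR most specifically covers
    dst_ip (longest-prefix match); None means no route exists."""
    dst = ipaddress.ip_address(dst_ip)
    matching = [r for r in route_table
                if dst in ipaddress.ip_network(r["cidr"])]
    if not matching:
        return None  # no matching rule: traffic cannot be routed
    best = max(matching,
               key=lambda r: ipaddress.ip_network(r["cidr"]).prefixlen)
    return best["target"]

# Hypothetical VCN route table: one specific rule and a catch-all
route_table = [
    {"cidr": "0.0.0.0/0", "target": "internet-gateway"},
    {"cidr": "192.0.2.0/24", "target": "drg"},
]
```

With this table, traffic to 192.0.2.x takes the more specific DRG rule while all other non-local traffic falls through to the catch-all gateway rule.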
Security rules configured for a VCN represent overlay firewall rules for the VCN. The security rules can include ingress and egress rules, and specify the types of traffic (e.g., based upon protocol and port) that is allowed in and out of the instances within the VCN. The customer can choose whether a given rule is stateful or stateless. For instance, the customer can allow incoming SSH traffic from anywhere to a set of instances by setting up a stateful ingress rule with source CIDR 0.0.0.0/0 and destination TCP port 22. Security rules can be implemented using network security groups or security lists. A network security group consists of a set of security rules that apply only to the resources in that group. A security list, on the other hand, includes rules that apply to all the resources in any subnet that uses the security list. A VCN may be provided with a default security list with default security rules. DHCP options configured for a VCN provide configuration information that is automatically provided to the instances in the VCN when the instances boot up.
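The SSH example above — a stateful ingress rule with source CIDR 0.0.0.0/0 and destination TCP port 22 — can be evaluated as follows (a simplified sketch with hypothetical rule fields; real security-list semantics include more attributes than shown here):

```python
import ipaddress

def ingress_allowed(rules, src_ip, protocol, dst_port):
    """Return True if any ingress rule admits the traffic."""
    for r in rules:
        if (r["direction"] == "ingress"
                and r["protocol"] == protocol
                and r["dst_port"] == dst_port
                and ipaddress.ip_address(src_ip)
                    in ipaddress.ip_network(r["source_cidr"])):
            return True
    return False

# The stateful ingress rule from the example: SSH from anywhere
rules = [{"direction": "ingress", "protocol": "tcp",
          "source_cidr": "0.0.0.0/0", "dst_port": 22,
          "stateful": True}]
```

Under this rule set, inbound TCP traffic to port 22 is admitted from any source, while traffic to any other port is rejected unless another rule covers it.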
In certain embodiments, the configuration information for a VCN is determined and stored by a VCN Control Plane. The configuration information for a VCN may include, for example, information about: the address range associated with the VCN, subnets within the VCN and associated information, one or more VRs associated with the VCN, compute instances in the VCN and their associated VNICs, NVDs executing the various virtualization network functions (e.g., VNICs, VRs, gateways) associated with the VCN, state information for the VCN, and other VCN-related information. In certain embodiments, a VCN Distribution Service publishes the configuration information stored by the VCN Control Plane, or portions thereof, to the NVDs. The distributed information may be used to update information (e.g., forwarding tables, routing tables, etc.) stored and used by the NVDs to forward packets to and from the compute instances in the VCN.
In certain embodiments, the creation of VCNs and subnets is handled by a VCN Control Plane (CP) and the launching of compute instances is handled by a Compute Control Plane. The Compute Control Plane is responsible for allocating the physical resources for the compute instance and then calls the VCN Control Plane to create and attach VNICs to the compute instance. The VCN CP also sends VCN data mappings to the VCN data plane, which is configured to perform routing and packet forwarding functions. In certain embodiments, the VCN CP provides a distribution service that is responsible for providing updates to the VCN data plane. Examples of a VCN Control Plane are depicted in FIGS. 12, 13, 14, and 15 (see references 1216, 1316, 1416, and 1516) and described below.
A customer can create one or more VCNs using CSPI hosted resources. A compute instance deployed in a customer VCN can communicate with different endpoints. These endpoints can include endpoints hosted by the CSPI and endpoints outside of the CSPI.
Examples of the architecture are depicted in FIGS. 1, 2, 3, 4, 5, 12, 13, 14, and 15 and are described below. FIG. 1 is a high-level diagram of a distributed environment 100 showing an overlay or customer VCN hosted by CSPI according to certain embodiments. The distributed environment depicted in FIG. 1 includes multiple components in the overlay network. Distributed environment 100 depicted in FIG. 1 is merely an example and is not intended to unduly limit the scope of claimed embodiments. Many variations, alternatives, and modifications are possible. For example, in some implementations, the distributed environment depicted in FIG. 1 may have more or fewer systems or components than those shown in FIG. 1, may combine two or more systems, or may have a different configuration or arrangement of systems.
As shown in the embodiment depicted in FIG. 1, distributed environment 100 comprises CSPI 101 that provides services and resources that customers can subscribe to and use to build their virtual cloud networks (VCNs). In certain embodiments, CSPI 101 offers IaaS services to subscribing customers. The data centers within CSPI 101 may be organized into one or more regions. One example region "Region US" 102 is shown in FIG. 1. A customer has configured a customer VCN 104 for region 102. The customer may deploy various compute instances on VCN 104, where the compute instances may include virtual machines or bare metal instances. Examples of instances include applications, databases, load balancers, and the like.
In the embodiment depicted in FIG. 1, customer VCN 104 comprises two subnets, namely, "Subnet-1" and "Subnet-2", each subnet with its own CIDR IP address range. In FIG. 1, the overlay IP address range for Subnet-1 is 10.0/16 and the address range for Subnet-2 is 10.1/16. A VCN Virtual Router 105 represents a logical gateway for the VCN that enables communications between subnets of VCN 104, and with other endpoints outside the VCN. VCN VR 105 is configured to route traffic between VNICs in VCN 104 and gateways associated with VCN 104. VCN VR 105 provides a port for each subnet of VCN 104. For example, VR 105 may provide a port with IP address 10.0.0.1 for Subnet-1 and a port with IP address 10.1.0.1 for Subnet-2.
Multiple compute instances may be deployed on each subnet, where the compute instances can be virtual machine instances and/or bare metal instances. The compute instances in a subnet may be hosted by one or more host machines within CSPI 101. A compute instance participates in a subnet via a VNIC associated with the compute instance. For example, as shown in FIG. 1, compute instance C1 is part of Subnet-1 via a VNIC associated with the compute instance. In a similar manner, compute instance C2 is part of Subnet-1 via a VNIC associated with C2. Likewise, multiple compute instances, which may be virtual machine instances or bare metal instances, may be part of Subnet-1. Via its associated VNIC, each compute instance is assigned a private overlay IP address and a MAC address. For example, in FIG. 1, compute instance C1 has an overlay IP address of 10.0.0.2 and a MAC address of M1, while compute instance C2 has a private overlay IP address of 10.0.0.3 and a MAC address of M2. Each compute instance in Subnet-1, including compute instances C1 and C2, has a default route to VCN VR 105 using IP address 10.0.0.1, which is the IP address for a port of VCN VR 105 for Subnet-1.
Subnet-2 can have multiple compute instances deployed on it, including virtual machine instances and/or bare metal instances. For example, as shown in FIG. 1, compute instances D1 and D2 are part of Subnet-2 via VNICs associated with the respective compute instances. In the embodiment depicted in FIG. 1, compute instance D1 has an overlay IP address of 10.1.0.2 and a MAC address of MM1, while compute instance D2 has a private overlay IP address of 10.1.0.3 and a MAC address of MM2. Each compute instance in Subnet-2, including compute instances D1 and D2, has a default route to VCN VR 105 using IP address 10.1.0.1, which is the IP address for a port of VCN VR 105 for Subnet-2.
VCN 104 may also include one or more load balancers. For example, a load balancer may be provided for a subnet and may be configured to load balance traffic across multiple compute instances on the subnet. A load balancer may also be provided to load balance traffic across subnets in the VCN.
A particular compute instance deployed on VCN 104 can communicate with various different endpoints. These endpoints may include endpoints that are hosted by CSPI 101 and endpoints outside CSPI 101. Endpoints that are hosted by CSPI 101 may include: an endpoint on the same subnet as the particular compute instance (e.g., communications between two compute instances in Subnet-1); an endpoint on a different subnet but within the same VCN (e.g., communication between a compute instance in Subnet-1 and a compute instance in Subnet-2); an endpoint in a different VCN in the same region (e.g., communications between a compute instance in Subnet-1 and an endpoint in a VCN in the same region 106 or 110, communications between a compute instance in Subnet-1 and an endpoint in service network 110 in the same region); or an endpoint in a VCN in a different region (e.g., communications between a compute instance in Subnet-1 and an endpoint in a VCN in a different region 108). A compute instance in a subnet hosted by CSPI 101 may also communicate with endpoints that are not hosted by CSPI 101 (i.e., are outside CSPI 101). These outside endpoints include endpoints in the customer's on-premise network 116, endpoints within other remote cloud-hosted networks 118, public endpoints 114 accessible via a public network such as the Internet, and other endpoints.
Communications between compute instances on the same subnet are facilitated using VNICs associated with the source compute instance and the destination compute instance. For example, compute instance C1 in Subnet-1 may want to send packets to compute instance C2 in Subnet-1. For a packet originating at a source compute instance and whose destination is another compute instance in the same subnet, the packet is first processed by the VNIC associated with the source compute instance. Processing performed by the VNIC associated with the source compute instance can include determining destination information for the packet from the packet headers, identifying any policies (e.g., security lists) configured for the VNIC associated with the source compute instance, determining a next hop for the packet, performing any packet encapsulation/decapsulation functions as needed, and then forwarding/routing the packet to the next hop with the goal of facilitating communication of the packet to its intended destination. When the destination compute instance is in the same subnet as the source compute instance, the VNIC associated with the source compute instance is configured to identify the VNIC associated with the destination compute instance and forward the packet to that VNIC for processing. The VNIC associated with the destination compute instance is then executed and forwards the packet to the destination compute instance.
For a packet to be communicated from a compute instance in a subnet to an endpoint in a different subnet in the same VCN, the communication is facilitated by the VNICs associated with the source and destination compute instances and the VCN VR. For example, if compute instance C1 in Subnet-1 in FIG. 1 wants to send a packet to compute instance D1 in Subnet-2, the packet is first processed by the VNIC associated with compute instance C1. The VNIC associated with compute instance C1 is configured to route the packet to VCN VR 105 using default route or port 10.0.0.1 of the VCN VR. VCN VR 105 is configured to route the packet to Subnet-2 using port 10.1.0.1. The packet is then received and processed by the VNIC associated with D1, and the VNIC forwards the packet to compute instance D1.
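The two forwarding cases described above — intra-subnet delivery directly between VNICs, and inter-subnet delivery through the VCN VR — reduce to one decision made at the source VNIC. The sketch below is a hypothetical simplification (the function and table names are illustrative, not part of any actual implementation):

```python
import ipaddress

def next_hop(src_subnet_cidr, dst_ip, vr_port_ip, vnic_table):
    """Decide the next hop for a packet leaving a source VNIC.

    If the destination lies inside the source VNIC's own subnet, the
    packet is handed directly to the destination VNIC; otherwise it
    is forwarded to the VCN VR port for the source subnet.
    """
    if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(src_subnet_cidr):
        return ("vnic", vnic_table[dst_ip])  # same-subnet delivery
    return ("vcn_vr", vr_port_ip)            # cross-subnet via the VR

# Mapping of overlay IPs to VNICs in Subnet-1 (illustrative)
vnic_table = {"10.0.0.3": "VNIC-C2"}
```

Using the figures' addresses: C1 (10.0.0.2) sending to C2 (10.0.0.3) stays within Subnet-1, while C1 sending to D1 (10.1.0.2) is handed to VR port 10.0.0.1.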
For a packet to be communicated from a compute instance in VCN 104 to an endpoint that is outside VCN 104, the communication is facilitated by the VNIC associated with the source compute instance, VCN VR 105, and gateways associated with VCN 104. One or more types of gateways may be associated with VCN 104. A gateway is an interface between a VCN and another endpoint, where the other endpoint is outside the VCN. A gateway is a Layer-3/IP layer concept and enables a VCN to communicate with endpoints outside the VCN. A gateway thus facilitates traffic flow between a VCN and other VCNs or networks. Various different types of gateways may be configured for a VCN to facilitate different types of communications with different types of endpoints. Depending upon the gateway, the communications may be over public networks (e.g., the Internet) or over private networks. Various communication protocols may be used for these communications.
For example, compute instance C1 may want to communicate with an endpoint outside VCN 104. The packet may be first processed by the VNIC associated with source compute instance C1. The VNIC processing determines that the destination for the packet is outside Subnet-1 of C1. The VNIC associated with C1 may forward the packet to VCN VR 105 for VCN 104. VCN VR 105 then processes the packet and, as part of the processing, based upon the destination for the packet, determines a particular gateway associated with VCN 104 as the next hop for the packet. VCN VR 105 may then forward the packet to the particular identified gateway. For example, if the destination is an endpoint within the customer's on-premise network, then the packet may be forwarded by VCN VR 105 to Dynamic Routing Gateway (DRG) 122 configured for VCN 104. The packet may then be forwarded from the gateway to a next hop to facilitate communication of the packet to its final intended destination.
Various different types of gateways may be configured for a VCN. Examples of gateways that may be configured for a VCN are depicted in FIG. 1 and described below. Examples of gateways associated with a VCN are also depicted in FIGS. 12, 13, 14, and 15 (for example, gateways referenced by reference numbers 1234, 1236, 1238, 1334, 1336, 1338, 1434, 1436, 1438, 1534, 1536, and 1538) and described below. As shown in the embodiment depicted in FIG. 1, a Dynamic Routing Gateway (DRG) 122 may be added to or be associated with customer VCN 104 and provides a path for private network traffic communication between customer VCN 104 and another endpoint, where the other endpoint can be the customer's on-premise network 116, a VCN 108 in a different region of CSPI 101, or other remote cloud networks 118 not hosted by CSPI 101. Customer on-premise network 116 may be a customer network or a customer data center built using the customer's resources. Access to customer on-premise network 116 is generally very restricted. For a customer that has both a customer on-premise network 116 and one or more VCNs 104 deployed or hosted in the cloud by CSPI 101, the customer may want their on-premise network 116 and their cloud-based VCN 104 to be able to communicate with each other. This enables a customer to build an extended hybrid environment encompassing the customer's VCN 104 hosted by CSPI 101 and their on-premise network 116. DRG 122 enables this communication. To enable such communications, a communication channel 124 is set up where one endpoint of the channel is in customer on-premise network 116 and the other endpoint is in CSPI 101 and connected to customer VCN 104. Communication channel 124 can be over public communication networks such as the Internet or private communication networks. Various different communication protocols may be used, such as IPsec VPN technology over a public communication network such as the Internet, Oracle's FastConnect technology that uses a private network instead of a public network, and others.
The device or equipment in customer on-premise network 116 that forms one endpoint for communication channel 124 is referred to as the customer premise equipment (CPE), such as CPE 126 depicted in FIG. 1. On the CSPI 101 side, the endpoint may be a host machine executing DRG 122.
In certain embodiments, a Remote Peering Connection (RPC) can be added to a DRG, which allows a customer to peer one VCN with another VCN in a different region. Using such an RPC, customer VCN 104 can use DRG 122 to connect with a VCN 108 in another region. DRG 122 may also be used to communicate with other remote cloud networks 118 not hosted by CSPI 101, such as a Microsoft Azure cloud, Amazon AWS cloud, and others.
As shown in FIG. 1, an Internet Gateway (IGW) 120 may be configured for customer VCN 104 that enables a compute instance on VCN 104 to communicate with public endpoints 114 accessible over a public network such as the Internet. IGW 120 is a gateway that connects a VCN to a public network such as the Internet. IGW 120 enables a public subnet (where the resources in the public subnet have public overlay IP addresses) within a VCN, such as VCN 104, direct access to public endpoints 112 on a public network 114 such as the Internet. Using IGW 120, connections can be initiated from a subnet within VCN 104 or from the Internet.
A Network Address Translation (NAT) gateway 128 can be configured for customer VCN 104 and enables cloud resources in the customer's VCN that do not have dedicated public overlay IP addresses to access the Internet, and it does so without exposing those resources to direct incoming Internet connections (e.g., L4-L7 connections). This enables a private subnet within a VCN, such as private Subnet-1 in VCN 104, with private access to public endpoints on the Internet. With NAT gateways, connections can be initiated only from the private subnet to the public Internet and not from the Internet to the private subnet.
In certain embodiments, a Service Gateway (SGW) 126 can be configured for customer VCN 104 and provides a path for private network traffic between VCN 104 and supported service endpoints in a service network 110. In certain embodiments, service network 110 may be provided by the CSP and may provide various services. An example of such a service network is Oracle's Services Network, which provides various services that can be used by customers. For example, a compute instance (e.g., a DB system) in a private subnet of customer VCN 104 can back up data to a service endpoint (e.g., Object Storage) without needing public IP addresses or access to the Internet. In certain embodiments, a VCN can have only one SGW, and connections can only be initiated from a subnet within the VCN and not from service network 110. If a VCN is peered with another VCN, resources in the other VCN typically cannot access the SGW. Resources in on-premises networks that are connected to a VCN with FastConnect or VPN Connect can also use the service gateway configured for that VCN.
In certain implementations, SGW 126 uses the concept of a service Classless Inter-Domain Routing (CIDR) label, which is a string that represents all the regional public IP address ranges for the service or group of services of interest. The customer uses the service CIDR label when they configure the SGW and related route rules to control traffic to the service. The customer can optionally utilize it when configuring security rules without needing to adjust them if the service's public IP addresses change in the future.
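The indirection that a service CIDR label provides can be sketched as a label-to-ranges mapping consulted at match time, so that rules written against the label keep working when the underlying ranges change. The label name and address ranges below are purely illustrative (documentation-reserved example ranges, not any service's real addresses):

```python
import ipaddress

# Hypothetical expansion of a service CIDR label to the regional
# public IP ranges of the service group it names.
SERVICE_CIDR_LABELS = {
    "ALL-SERVICES": ["192.0.2.0/24", "198.51.100.0/24"],
}

def matches_service_label(label, dst_ip):
    """Check a destination against every range the label stands for;
    updating SERVICE_CIDR_LABELS updates every rule using the label."""
    dst = ipaddress.ip_address(dst_ip)
    return any(dst in ipaddress.ip_network(cidr)
               for cidr in SERVICE_CIDR_LABELS[label])
```

A route or security rule can then reference "ALL-SERVICES" rather than enumerating address ranges directly.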
A Local Peering Gateway (LPG) 132 is a gateway that can be added to customer VCN 104 and enables VCN 104 to peer with another VCN in the same region. Peering means that the VCNs communicate using private IP addresses, without the traffic traversing a public network such as the Internet or without routing the traffic through the customer's on-premises network 116. In preferred embodiments, a VCN has a separate LPG for each peering it establishes. Local Peering or VCN Peering is a common practice used to establish network connectivity between different applications or infrastructure management functions.
Service providers, such as providers of services in service network 110, may provide access to services using different access models. In a public access model, services may be exposed as public endpoints that are publicly accessible by compute instances in a customer VCN via a public network such as the Internet, and/or may be privately accessible via SGW 126. In a specific private access model, services are made accessible as private IP endpoints in a private subnet in the customer's VCN. This is referred to as Private Endpoint (PE) access and enables a service provider to expose their service as an instance in the customer's private network. A Private Endpoint resource represents a service within the customer's VCN. Each PE manifests as a VNIC (referred to as a PE-VNIC, with one or more private IP addresses) in a subnet chosen by the customer in the customer's VCN. A PE thus provides a way to present a service within a private customer VCN subnet using a VNIC. Since the endpoint is exposed as a VNIC, all the features associated with a VNIC, such as routing rules, security lists, etc., are now available for the PE-VNIC.
A service provider can register its service to allow access via a PE. The provider can assign policies to the service that limit the visibility of the service to customer locations. A provider can register multiple services under a single virtual IP address (VIP), especially for multi-tenant services. There can be multiple private endpoints (in multiple VCNs) representing the same service.
Compute instances in the private subnet can use the PE VNIC's private IP address or the service DNS name to access the service. Compute instances in the customer VCN can access the service by sending traffic to the private IP address of the PE in the customer VCN. A Private Access Gateway (PAGW) 130 is a gateway resource that can be attached to a service provider VCN (e.g., a VCN in service network 110) that acts as an ingress/egress point for all traffic from/to customer subnet private endpoints. PAGW 130 enables a provider to scale the number of PE connections without utilizing its internal IP address resources. A provider needs only configure one PAGW for any number of services registered in a single VCN. Providers can represent a service as a private endpoint in multiple VCNs of one or more customers. From the customer's perspective, the PE VNIC, instead of being attached to a customer's instance, appears attached to the service with which the customer wishes to interact. The traffic destined to the private endpoint is routed via PAGW 130 to the service. These are referred to as customer-to-service private connections (C2S connections).
The PE concept can also be used to extend the private access for the service to the customer's on-premises networks and data centers, by allowing the traffic to flow through FastConnect/IPsec links and the private endpoint in the customer VCN. Private access for the service can also be extended to the customer's peered VCNs, by allowing the traffic to flow between LPG 132 and the PE in the customer's VCN.
A customer can control routing in a VCN at the subnet level, so the customer can specify which subnets in the customer's VCN, such as VCN 104, use each gateway. A VCN's route tables are used to decide if traffic is allowed out of the VCN through a particular gateway. For example, in a particular instance, a route table for a public subnet within customer VCN 104 may send non-local traffic through IGW 120. The route table for a private subnet within the same customer VCN 104 may send traffic destined for CSP services through SGW 126. All remaining traffic may be sent via NAT gateway 128. Route tables only control traffic going out of a VCN.
Security lists associated with a VCN are used to control traffic that comes into a VCN via a gateway via inbound connections. All resources in a subnet use the same route table and security lists. Security lists may be used to control specific types of traffic allowed in and out of instances in a subnet of a VCN. Security list rules may comprise ingress (inbound) and egress (outbound) rules. For example, an ingress rule may specify an allowed source address range, while an egress rule may specify an allowed destination address range. Security rules may specify a particular protocol (e.g., TCP, ICMP), a particular port (e.g., 22 for SSH, 3389 for Windows RDP), etc. In certain implementations, an instance's operating system may enforce its own firewall rules that are aligned with the security list rules. Rules may be stateful (e.g., a connection is tracked and the response is automatically allowed without an explicit security list rule for the response traffic) or stateless.
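The stateful behavior mentioned above — the response to a tracked connection is admitted without its own explicit rule — can be illustrated with a minimal connection-tracking sketch (a hypothetical simplification; real connection tracking also handles timeouts, protocol state, and more):

```python
class StatefulFirewall:
    """Track connections permitted by stateful rules so that the
    response traffic is admitted without an explicit ingress rule."""

    def __init__(self):
        self.conntrack = set()

    def record_egress(self, five_tuple):
        # A stateful egress rule matched: remember the connection.
        self.conntrack.add(five_tuple)

    def response_permitted(self, five_tuple):
        # Response traffic has source and destination reversed
        # relative to the tracked outbound connection.
        src, sport, dst, dport, proto = five_tuple
        reverse = (dst, dport, src, sport, proto)
        return reverse in self.conntrack

fw = StatefulFirewall()
fw.record_egress(("10.0.0.2", 40000, "203.0.113.5", 443, "tcp"))
```

Here the reply from 203.0.113.5:443 back to 10.0.0.2:40000 is admitted because the outbound connection was tracked; an unrelated inbound packet is not. A stateless rule set, by contrast, would require an explicit rule for the response direction.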
Access from a customer VCN (i.e., by a resource or compute instance deployed on VCN 104) can be categorized as public access, private access, or dedicated access. Public access refers to an access model where a public IP address or a NAT is used to access a public endpoint. Private access enables customer workloads in VCN 104 with private IP addresses (e.g., resources in a private subnet) to access services without traversing a public network such as the Internet. In certain embodiments, CSPI 101 enables customer VCN workloads with private IP addresses to access the (public service endpoints of) services using a service gateway. A service gateway thus offers a private access model by establishing a virtual link between the customer's VCN and the service's public endpoint residing outside the customer's private network.
Additionally, CSPI may offer dedicated public access using technologies such as FastConnect public peering, where customer on-premises instances can access one or more services in a customer VCN using a FastConnect connection without traversing a public network such as the Internet. CSPI may also offer dedicated private access using FastConnect private peering, where customer on-premises instances with private IP addresses can access the customer's VCN workloads using a FastConnect connection. FastConnect is a network connectivity alternative to using the public Internet to connect a customer's on-premise network to CSPI and its services. FastConnect provides an easy, elastic, and economical way to create a dedicated and private connection with higher bandwidth options and a more reliable and consistent networking experience when compared to Internet-based connections.
FIG. 1 and the accompanying description above describe various virtualized components in an example virtual network. As described above, the virtual network is built on the underlying physical or substrate network. FIG. 2 depicts a simplified architectural diagram of the physical components in the physical network within CSPI 200 that provide the underlay for the virtual network, according to certain embodiments. As shown, CSPI 200 provides a distributed environment comprising components and resources (e.g., compute, memory, and networking resources) provided by a cloud service provider (CSP). These components and resources are used to provide cloud services (e.g., IaaS services) to subscribing customers, i.e., customers that have subscribed to one or more services provided by the CSP. Based upon the services subscribed to by a customer, a subset of the resources (e.g., compute, memory, and networking resources) of CSPI 200 are provisioned for the customer. Customers can then build their own cloud-based (i.e., CSPI-hosted) customizable and private virtual networks using the physical compute, memory, and networking resources provided by CSPI 200. As previously indicated, these customer networks are referred to as virtual cloud networks (VCNs). A customer can deploy one or more customer resources, such as compute instances, on these customer VCNs. Compute instances can be in the form of virtual machines, bare metal instances, and the like. CSPI 200 provides infrastructure and a set of complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted environment.
In the embodiment depicted in FIG. 2, the physical components of CSPI 200 include one or more physical host machines or physical servers (e.g., 202, 206, 208), network virtualization devices (NVDs) (e.g., 210, 212), top-of-rack (TOR) switches (e.g., 214, 216), and a physical network (e.g., 218), and switches in physical network 218. The physical host machines or servers may host and execute various compute instances that participate in one or more subnets of a VCN. The compute instances may include virtual machine instances and bare metal instances. For example, the various compute instances depicted in FIG. 1 may be hosted by the physical host machines depicted in FIG. 2. The virtual machine compute instances in a VCN may be executed by one host machine or by multiple different host machines. The physical host machines may also host virtual host machines, container-based hosts or functions, and the like. The VNICs and VCN VR depicted in FIG. 1 may be executed by the NVDs depicted in FIG. 2. The gateways depicted in FIG. 1 may be executed by the host machines and/or by the NVDs depicted in FIG. 2.
The host machines or servers may execute a hypervisor (also referred to as a virtual machine monitor or VMM) that creates and enables a virtualized environment on the host machines. The virtualization or virtualized environment facilitates cloud-based computing. One or more compute instances may be created, executed, and managed on a host machine by a hypervisor on that host machine. The hypervisor on a host machine enables the physical computing resources of the host machine (e.g., compute, memory, and networking resources) to be shared between the various compute instances executed by the host machine.
For example, as depicted in FIG. 2, host machines 202 and 208 execute hypervisors 260 and 266, respectively. These hypervisors may be implemented using software, firmware, or hardware, or combinations thereof. Typically, a hypervisor is a process or a software layer that sits on top of the host machine's operating system (OS), which in turn executes on the hardware processors of the host machine. The hypervisor provides a virtualized environment by enabling the physical computing resources (e.g., processing resources such as processors/cores, memory resources, networking resources) of the host machine to be shared among the various virtual machine compute instances executed by the host machine. For example, in FIG. 2, hypervisor 260 may sit on top of the OS of host machine 202 and enables the computing resources (e.g., processing, memory, and networking resources) of host machine 202 to be shared between compute instances (e.g., virtual machines) executed by host machine 202. A virtual machine can have its own operating system (referred to as a guest operating system), which may be the same as or different from the OS of the host machine. The operating system of a virtual machine executed by a host machine may be the same as or different from the operating system of another virtual machine executed by the same host machine. A hypervisor thus enables multiple operating systems to be executed alongside each other while sharing the same computing resources of the host machine. The host machines depicted in FIG. 2 may have the same or different types of hypervisors.
A compute instance can be a virtual machine instance or a bare metal instance. In FIG. 2, compute instance 268 on host machine 202 and compute instance 274 on host machine 208 are examples of virtual machine instances. Host machine 206 is an example of a bare metal instance that is provided to a customer.
In certain instances, an entire host machine may be provisioned to a single customer, and all of the one or more compute instances (either virtual machines or a bare metal instance) hosted by that host machine belong to that same customer. In other instances, a host machine may be shared between multiple customers (i.e., multiple tenants). In such a multi-tenancy scenario, a host machine may host virtual machine compute instances belonging to different customers. These compute instances may be members of different VCNs of different customers. In certain embodiments, a bare metal compute instance is hosted by a bare metal server without a hypervisor. When a bare metal compute instance is provisioned, a single customer or tenant maintains control of the physical CPU, memory, and network interfaces of the host machine hosting the bare metal instance, and the host machine is not shared with other customers or tenants.
As previously described, each compute instance that is part of a VCN is associated with a VNIC that enables the compute instance to become a member of a subnet of the VCN. The VNIC associated with a compute instance facilitates the communication of packets or frames to and from the compute instance. A VNIC is associated with a compute instance when the compute instance is created. In certain embodiments, for a compute instance executed by a host machine, the VNIC associated with that compute instance is executed by an NVD connected to the host machine. For example, in FIG. 2, host machine 202 executes a virtual machine compute instance 268 that is associated with VNIC 276, and VNIC 276 is executed by NVD 210 connected to host machine 202. As another example, bare metal instance 272 hosted by host machine 206 is associated with VNIC 280, which is executed by NVD 212 connected to host machine 206. As yet another example, VNIC 284 is associated with compute instance 274 executed by host machine 208, and VNIC 284 is executed by NVD 212 connected to host machine 208.
For compute instances hosted by a host machine, an NVD connected to that host machine also executes VCN VRs corresponding to the VCNs of which the compute instances are members. For example, in the embodiment depicted in FIG. 2, NVD 210 executes VCN VR 277 corresponding to the VCN of which compute instance 268 is a member. NVD 212 may also execute one or more VCN VRs 283 corresponding to the VCNs of the compute instances hosted by host machines 206 and 208.
A host machine may contain one or more network interface cards (NICs) that allow the host machine to connect to other devices. A NIC on a host machine can provide one or more ports (or interfaces) that allow the host machine to communicatively connect to another device. For example, a host machine can connect to an NVD using one or more ports (or interfaces) provided on the host machine and the NVD. A host machine can also connect to other devices like another host machine.
For example, in FIG. 2, host machine 202 is connected to NVD 210 via link 220 that extends between a port 234 provided by a NIC 232 of host machine 202 and a port 236 of NVD 210. Host machine 206 is connected to NVD 212 via link 224 that extends between a port 246 provided by a NIC 244 of host machine 206 and a port 248 of NVD 212. Host machine 208 is connected to NVD 212 via link 226 that extends between a port 252 provided by a NIC 250 of host machine 208 and a port 254 of NVD 212.
The NVDs are in turn connected via communication links to top-of-the-rack (TOR) switches, which are connected to physical network 218 (also referred to as the switch fabric). In certain embodiments, the links between a host machine and an NVD, and between an NVD and a TOR switch, are Ethernet links. For example, in FIG. 2, NVDs 210 and 212 are connected to TOR switches 214 and 216, respectively, using links 228 and 230. In certain embodiments, links 220, 224, 226, 228, and 230 are Ethernet links. The collection of host machines and NVDs that are connected to a TOR is sometimes referred to as a rack.
Physical network 218 provides a communication fabric that enables TOR switches to communicate with each other. Physical network 218 can be a multi-tiered network. In certain implementations, physical network 218 is a multi-tiered Clos network of switches, with TOR switches 214 and 216 representing the leaf-level nodes of the multi-tiered and multi-node physical switching network 218. Different Clos network configurations are possible, including but not limited to a 2-tier network, a 3-tier network, a 4-tier network, a 5-tier network, and, in general, an "n"-tiered network. An example of a Clos network is depicted in FIG. 5 and described below.
Various different connection configurations are possible between host machines and NVDs, such as a one-to-one configuration, a many-to-one configuration, a one-to-many configuration, and others. In a one-to-one configuration, each host machine is connected to its own separate NVD. For example, in FIG. 2, host machine 202 is connected to NVD 210 via NIC 232 of host machine 202. In a many-to-one configuration, multiple host machines are connected to one NVD. For example, in FIG. 2, host machines 206 and 208 are connected to the same NVD 212 via NICs 244 and 250, respectively.
In a one-to-many configuration, one host machine is connected to multiple NVDs. FIG. 3 shows an example within CSPI 300 where a host machine is connected to multiple NVDs. As shown in FIG. 3, host machine 302 comprises a network interface card (NIC) 304 that includes multiple ports 306 and 308. Host machine 302 is connected to a first NVD 310 via port 306 and link 320, and to a second NVD 312 via port 308 and link 322. Ports 306 and 308 may be Ethernet ports, and links 320 and 322 between host machine 302 and NVDs 310 and 312 may be Ethernet links. NVD 310 is in turn connected to a first TOR switch 314, and NVD 312 is connected to a second TOR switch 316. The links between NVDs 310 and 312 and TOR switches 314 and 316 may be Ethernet links. TOR switches 314 and 316 represent the tier-0 switching devices in multi-tiered physical network 318.
The arrangement depicted in FIG. 3 provides two separate physical network paths to and from physical switch network 318 to host machine 302: a first path traversing TOR switch 314 to NVD 310 to host machine 302, and a second path traversing TOR switch 316 to NVD 312 to host machine 302. The separate paths provide for enhanced availability (referred to as high availability) of host machine 302. If there are problems in one of the paths (e.g., a link in one of the paths goes down) or devices (e.g., a particular NVD is not functioning), then the other path may be used for communications to/from host machine 302.
In the setup depicted in FIG. 3, the host machine is connected to two different NVDs using two different ports provided by a NIC of the host machine. In other embodiments, a host machine may include multiple NICs that enable connectivity of the host machine to multiple NVDs.
Referring back to FIG. 2, an NVD is a physical device or component that performs one or more network and/or storage virtualization functions. An NVD may be any device with one or more processing units (e.g., CPUs, network processing units (NPUs), FPGAs, packet processing pipelines, etc.), memory including cache, and ports. The various virtualization functions may be performed by software/firmware executed by the one or more processing units of the NVD.
An NVD may be implemented in various different forms. For example, in certain embodiments, an NVD is implemented as an interface card referred to as a smartNIC or an intelligent NIC with an embedded processor onboard. A smartNIC is a separate device from the NICs on the host machines. In FIG. 2, NVDs 210 and 212 may be implemented as smartNICs that are connected to host machine 202, and host machines 206 and 208, respectively.
A smartNIC is, however, just one example of an NVD implementation. Various other implementations are possible. For example, in some other implementations, an NVD, or one or more functions performed by the NVD, may be incorporated into or performed by one or more host machines, one or more TOR switches, and other components of CSPI 200. For example, an NVD may be embodied in a host machine, with the functions performed by an NVD being performed by the host machine. As another example, an NVD may be part of a TOR switch, or a TOR switch may be configured to perform functions performed by an NVD that enable the TOR switch to perform various complex packet transformations that are used for a public cloud. A TOR that performs the functions of an NVD is sometimes referred to as a smart TOR. In yet other implementations, where virtual machine (VM) instances, but not bare metal (BM) instances, are offered to customers, the functions performed by an NVD may be implemented inside a hypervisor of the host machine. In some other implementations, some of the functions of the NVD may be offloaded to a centralized service running on a fleet of host machines.
In certain embodiments, such as when implemented as a smartNIC as shown in FIG. 2, an NVD may comprise multiple physical ports that enable it to be connected to one or more host machines and to one or more TOR switches. A port on an NVD can be classified as a host-facing port (also referred to as a "south port") or a network-facing or TOR-facing port (also referred to as a "north port"). A host-facing port of an NVD is a port that is used to connect the NVD to a host machine. Examples of host-facing ports in FIG. 2 include port 236 on NVD 210, and ports 248 and 254 on NVD 212. A network-facing port of an NVD is a port that is used to connect the NVD to a TOR switch. Examples of network-facing ports in FIG. 2 include port 256 on NVD 210 and port 258 on NVD 212. As shown in FIG. 2, NVD 210 is connected to TOR switch 214 via link 228 extending from port 256 of NVD 210 to TOR switch 214. Likewise, NVD 212 is connected to TOR switch 216 via link 230 extending from port 258 of NVD 212 to TOR switch 216.
An NVD receives packets and frames from a host machine (e.g., packets and frames generated by a compute instance hosted by the host machine) via a host-facing port and, after performing the necessary packet processing, may forward the packets and frames to a TOR switch via a network-facing port of the NVD. An NVD may receive packets and frames from a TOR switch via a network-facing port of the NVD and, after performing the necessary packet processing, may forward the packets and frames to a host machine via a host-facing port of the NVD.
In certain embodiments, there may be multiple ports and associated links between an NVD and a TOR switch. These ports and links may be aggregated to form a link aggregator group of multiple ports or links (referred to as a LAG). Link aggregation allows multiple physical links between two endpoints (e.g., between an NVD and a TOR switch) to be treated as a single logical link. All the physical links in a given LAG may operate in full-duplex mode at the same speed. LAGs help increase the bandwidth and reliability of the connection between two endpoints. If one of the physical links in the LAG goes down, traffic is dynamically and transparently reassigned to one of the other physical links in the LAG. The aggregated physical links deliver higher bandwidth than each individual link. The multiple ports associated with a LAG are treated as a single logical port. Traffic can be load-balanced across the multiple physical links of a LAG. One or more LAGs may be configured between two endpoints. The two endpoints may be between an NVD and a TOR switch, between a host machine and an NVD, and the like.
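The LAG behavior described above — hashing a flow onto one member link so packet ordering is preserved, with traffic transparently reassigned when a link goes down — can be sketched as follows. This is an illustrative model only, not the actual NVD or switch implementation; the class and link names are hypothetical.

```python
import zlib

class Lag:
    """Toy model of a link aggregation group (LAG) between two endpoints."""

    def __init__(self, links):
        self.links = list(links)   # physical link identifiers in the LAG
        self.up = set(self.links)  # links that are currently operational

    def fail(self, link):
        """Mark a physical link as down; traffic shifts to remaining links."""
        self.up.discard(link)

    def pick_link(self, flow_key: bytes):
        """Hash a flow onto one operational link so that a given flow always
        uses the same physical link while the link set is unchanged."""
        live = sorted(self.up)
        if not live:
            raise RuntimeError("all physical links in the LAG are down")
        return live[zlib.crc32(flow_key) % len(live)]

lag = Lag(["link-a", "link-b"])
chosen = lag.pick_link(b"10.0.0.5->10.0.1.9:443")
lag.fail(chosen)
# After a failure, the same flow is transparently remapped to a surviving link.
assert lag.pick_link(b"10.0.0.5->10.0.1.9:443") != chosen
```

A real LAG (e.g., IEEE 802.1AX) negotiates membership with LACP and hashes in hardware, but the selection principle is the same.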
An NVD implements or performs network virtualization functions. These functions are performed by software/firmware executed by the NVD. Examples of network virtualization functions include, without limitation: packet encapsulation and decapsulation functions; functions for creating a VCN network; functions for implementing network policies, such as VCN security list (firewall) functionality; functions that facilitate the routing and forwarding of packets to and from compute instances in a VCN; and the like. In certain embodiments, upon receiving a packet, an NVD is configured to execute a packet processing pipeline for processing the packet and determining how the packet is to be forwarded or routed. As part of this packet processing pipeline, the NVD may execute one or more virtual functions associated with the overlay network, such as executing VNICs associated with compute instances in the VCN, executing a virtual router (VR) associated with the VCN, the encapsulation and decapsulation of packets to facilitate forwarding or routing in the virtual network, execution of certain gateways (e.g., local peering gateway), the implementation of security lists, network security groups, network address translation (NAT) functionality (e.g., the translation of a public IP to a private IP on a host-by-host basis), throttling functions, and other functions.
In certain embodiments, the packet processing data path in an NVD may comprise multiple packet pipelines, each composed of a series of packet transformation stages. In certain implementations, upon receiving a packet, the packet is parsed and classified to a single pipeline. The packet is then processed in a linear fashion, one stage after another, until the packet is either dropped or sent out over an interface of the NVD. These stages provide basic functional packet processing building blocks (e.g., validating headers, enforcing throttling, inserting a new layer-2 header, enforcing an L4 firewall, VCN encapsulation/decapsulation, etc.) so that new pipelines can be constructed by composing existing stages, and new functionality can be added by creating new stages and inserting them into existing pipelines.
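The linear stage-by-stage processing described above can be sketched as a chain of small functions, where returning nothing means the packet was dropped. This is a hedged illustration of the pipeline concept, not the NVD's actual data path; the stage names and packet representation are assumptions.

```python
def validate_headers(pkt):
    # Drop malformed packets that lack basic header fields.
    return pkt if "src" in pkt and "dst" in pkt else None

def enforce_l4_firewall(pkt):
    # Illustrative rule: drop traffic to port 23 (telnet).
    return None if pkt.get("dst_port") == 23 else pkt

def vcn_encapsulate(pkt):
    # Add a (mock) overlay encapsulation header.
    pkt["encap"] = "vcn-overlay"
    return pkt

def run_pipeline(pkt, stages):
    """Process the packet one stage at a time; None means it was dropped."""
    for stage in stages:
        pkt = stage(pkt)
        if pkt is None:
            return None
    return pkt

pipeline = [validate_headers, enforce_l4_firewall, vcn_encapsulate]
out = run_pipeline({"src": "10.0.0.5", "dst": "10.0.1.9", "dst_port": 443}, pipeline)
assert out["encap"] == "vcn-overlay"
assert run_pipeline({"src": "10.0.0.5", "dst": "10.0.1.9", "dst_port": 23}, pipeline) is None
```

New functionality is added exactly as the text describes: write a new stage function and insert it into the `stages` list.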
An NVD may perform both control plane and data plane functions corresponding to a control plane and a data plane of a VCN. Examples of a VCN control plane are depicted in FIGS. 12, 13, 14, and 15 (see references 1216, 1316, 1416, and 1516) and described below. Examples of a VCN data plane are depicted in FIGS. 12, 13, 14, and 15 (see references 1218, 1318, 1418, and 1518) and described below. The control plane functions include functions used for configuring a network (e.g., setting up routes and routing tables, configuring VNICs, etc.) that control how data is to be forwarded. In certain embodiments, a VCN control plane is provided that computes all overlay-to-substrate mappings centrally and publishes them to the NVDs and to virtual network edge devices such as various gateways like the DRG, the SGW, the IGW, etc. Firewall rules may also be published using the same mechanism. In certain embodiments, an NVD only receives the mappings that are relevant to that NVD. The data plane functions include functions for the actual routing/forwarding of a packet based upon the configuration set up using the control plane. A VCN data plane is implemented by encapsulating the customer's network packets before they traverse the substrate network. The encapsulation/decapsulation functionality is implemented on the NVDs. In certain embodiments, an NVD is configured to intercept all network packets in and out of host machines and perform network virtualization functions.
As indicated above, an NVD executes various virtualization functions including VNICs and VCN VRs. An NVD may execute VNICs associated with the compute instances hosted by one or more host machines connected to the NVD. For example, as depicted in FIG. 2, NVD 210 executes the functionality for VNIC 276 that is associated with compute instance 268 hosted by host machine 202 connected to NVD 210. As another example, NVD 212 executes VNIC 280 that is associated with bare metal compute instance 272 hosted by host machine 206, and executes VNIC 284 that is associated with compute instance 274 hosted by host machine 208. A host machine may host compute instances belonging to different VCNs, which belong to different customers, and the NVD connected to the host machine may execute the VNICs (i.e., execute VNIC-related functionality) corresponding to the compute instances.
An NVD also executes VCN virtual routers corresponding to the VCNs of the compute instances. For example, in the embodiment depicted in FIG. 2, NVD 210 executes VCN VR 277 corresponding to the VCN to which compute instance 268 belongs. NVD 212 executes one or more VCN VRs 283 corresponding to one or more VCNs to which compute instances hosted by host machines 206 and 208 belong. In certain embodiments, the VCN VR corresponding to a VCN is executed by all the NVDs connected to host machines that host at least one compute instance belonging to that VCN. If a host machine hosts compute instances belonging to different VCNs, the NVD connected to that host machine may execute VCN VRs corresponding to those different VCNs.
In addition to VNICs and VCN VRs, an NVD may execute various software (e.g., daemons) and include one or more hardware components that facilitate the various network virtualization functions performed by the NVD. For purposes of simplicity, these various components are grouped together as "packet processing components" shown in FIG. 2. For example, NVD 210 comprises packet processing components 286, and NVD 212 comprises packet processing components 288. For example, the packet processing components for an NVD may include a packet processor that is configured to interact with the NVD's ports and hardware interfaces to monitor all packets received by and communicated by the NVD and store network information. The network information may, for example, include network flow information identifying different network flows handled by the NVD and per-flow information (e.g., per-flow statistics). In certain embodiments, network flow information may be stored on a per-VNIC basis. The packet processor may perform packet-by-packet manipulations as well as implement dynamic NAT and L4 firewall (FW) functionality. As another example, the packet processing components may include a replication agent that is configured to replicate information stored by the NVD to one or more different replication target stores. As yet another example, the packet processing components may include a logging agent that is configured to perform logging functions for the NVD. The packet processing components may also include software for monitoring the performance and health of the NVD and, possibly, also monitoring the state and health of other components connected to the NVD.
FIG. 1 shows the components of an example virtual or overlay network, including a VCN, subnets within the VCN, compute instances deployed on subnets, VNICs associated with compute instances, a VR for a VCN, and a set of gateways configured for the VCN. The virtual or overlay components depicted in FIG. 1 may be executed or hosted by one or more of the physical components depicted in FIG. 2. For example, the compute instances in a VCN may be executed or hosted by one or more host machines depicted in FIG. 2. For a compute instance hosted by a host machine, the VNIC associated with that compute instance is typically executed by an NVD connected to that host machine (i.e., the VNIC functionality is provided by the NVD connected to that host machine). The VCN VR function for a VCN is executed by all the NVDs that are connected to host machines hosting or executing the compute instances that are part of that VCN. The gateways associated with a VCN may be executed by one or more different types of NVDs. For example, certain gateways may be executed by smartNICs, while others may be executed by one or more host machines or other implementations of NVDs.
As described above, a compute instance in a customer VCN may communicate with various different endpoints, where an endpoint can be in the same subnet as the source compute instance, in a different subnet but within the same VCN as the source compute instance, or outside the VCN of the source compute instance. These communications are facilitated by the VNICs associated with the compute instances, the VCN VRs, and the gateways associated with the VCNs.
For communications between two compute instances on the same subnet in a VCN, the communication is facilitated using VNICs associated with the source and destination compute instances. The source and destination compute instances may be hosted by the same host machine or by different host machines. A packet originating from a source compute instance may be forwarded from a host machine hosting the source compute instance to an NVD connected to that host machine. On the NVD, the packet is processed using a packet processing pipeline, which can include execution of the VNIC associated with the source compute instance. Since the destination endpoint of the packet is within the same subnet, execution of the VNIC associated with the source compute instance results in the packet being forwarded to an NVD executing the VNIC associated with the destination compute instance, which then processes and forwards the packet to the destination compute instance. The VNICs associated with the source and destination compute instances may be executed on the same NVD (e.g., when both the source and destination compute instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination compute instances are hosted by different host machines connected to different NVDs). The VNICs may use routing/forwarding tables stored by the NVD to determine the next hop for the packet.
For a packet to be communicated from a compute instance in a subnet to an endpoint in a different subnet in the same VCN, the packet originating from the source compute instance is communicated from the host machine hosting the source compute instance to the NVD connected to that host machine. On the NVD, the packet is processed using a packet processing pipeline, which can include execution of one or more VNICs and the VR associated with the VCN. For example, as part of the packet processing pipeline, the NVD executes or invokes functionality corresponding to the VNIC (also referred to as executing the VNIC) associated with the source compute instance. The functionality performed by the VNIC may include looking at the VLAN tag on the packet. Since the packet's destination is outside the subnet, the VCN VR functionality is next invoked and executed by the NVD. The VCN VR then routes the packet to the NVD executing the VNIC associated with the destination compute instance. The VNIC associated with the destination compute instance then processes the packet and forwards the packet to the destination compute instance. The VNICs associated with the source and destination compute instances may be executed on the same NVD (e.g., when both the source and destination compute instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination compute instances are hosted by different host machines connected to different NVDs).
If the destination of the packet is outside the VCN of the source compute instance, then the packet originating from the source compute instance is communicated from the host machine hosting the source compute instance to the NVD connected to that host machine. The NVD executes the VNIC associated with the source compute instance. Since the destination endpoint of the packet is outside the VCN, the packet is then processed by the VCN VR for that VCN. The NVD invokes the VCN VR functionality, which may result in the packet being forwarded to an NVD executing the appropriate gateway associated with the VCN. For example, if the destination is an endpoint within the customer's on-premise network, then the VCN VR may forward the packet to the NVD executing the DRG gateway configured for the VCN. The VCN VR may be executed on the same NVD as the NVD executing the VNIC associated with the source compute instance or on a different NVD. The gateway may be executed by an NVD, which may be a smartNIC, a host machine, or other NVD implementation. The gateway then processes the packet and forwards it to a next hop that facilitates communication of the packet to its intended destination endpoint. For example, in the embodiment depicted in FIG. 2, a packet originating from compute instance 268 may be communicated from host machine 202 to NVD 210 over link 220 (using NIC 232). On NVD 210, VNIC 276 is invoked since it is the VNIC associated with source compute instance 268. VNIC 276 is configured to examine the encapsulated information in the packet, determine a next hop for forwarding the packet with the goal of facilitating communication of the packet to its intended final destination, and then forward the packet to the determined next hop.
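The three forwarding cases walked through above — same subnet (deliver via the destination VNIC), same VCN but different subnet (hand off to the VCN VR), and outside the VCN (hand off to a gateway such as the DRG) — amount to a longest-scope dispatch on the destination address. A minimal sketch under assumed CIDR values (the table entries and return labels are illustrative, not the actual NVD routing tables):

```python
import ipaddress

def next_hop(dst_ip, src_subnet, vcn_cidr, gateway="DRG"):
    """Pick the forwarding path for a packet based on its destination."""
    dst = ipaddress.ip_address(dst_ip)
    if dst in ipaddress.ip_network(src_subnet):
        return "vnic"      # same subnet: deliver to the destination VNIC
    if dst in ipaddress.ip_network(vcn_cidr):
        return "vcn-vr"    # same VCN, other subnet: hand off to the VCN VR
    return gateway         # outside the VCN: forward to a gateway

# Assumed example addressing: subnet 10.0.0.0/24 inside VCN 10.0.0.0/16.
assert next_hop("10.0.0.9", "10.0.0.0/24", "10.0.0.0/16") == "vnic"
assert next_hop("10.0.5.7", "10.0.0.0/24", "10.0.0.0/16") == "vcn-vr"
assert next_hop("192.168.1.1", "10.0.0.0/24", "10.0.0.0/16") == "DRG"
```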
A compute instance deployed on a VCN can communicate with various different endpoints. These endpoints may include endpoints that are hosted by CSPI 200 and endpoints outside CSPI 200. Endpoints hosted by CSPI 200 may include instances in the same VCN or in other VCNs, which may be the customer's VCNs or VCNs not belonging to the customer.
Communications between endpoints hosted by CSPI 200 may occur over physical network 218. A compute instance may also communicate with endpoints that are not hosted by CSPI 200 or that are outside CSPI 200. Examples of such endpoints include endpoints within a customer's on-premise network or data center, or public endpoints accessible over a public network such as the Internet. Communications with endpoints outside CSPI 200 may be performed over public networks (e.g., the Internet) (not shown in FIG. 2) or private networks (not shown in FIG. 2) using various communication protocols.
The architecture of CSPI 200 depicted in FIG. 2 is merely an example and is not intended to be limiting. Variations, alternatives, and modifications are possible in alternative embodiments. For example, in some implementations, CSPI 200 may have more or fewer systems or components than those shown in FIG. 2, may combine two or more systems, or may have a different configuration or arrangement of systems. The systems, subsystems, and other components depicted in FIG. 2 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, using hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device).
FIG. 4 depicts connectivity between a host machine and an NVD for providing I/O virtualization for supporting multi-tenancy, according to certain embodiments. As depicted in FIG. 4, host machine 402 executes a hypervisor 404 that provides a virtualized environment. Host machine 402 executes two virtual machine instances: VM1 406 belonging to customer/tenant #1, and VM2 408 belonging to customer/tenant #2. Host machine 402 comprises a physical NIC 410 that is connected to an NVD 412 via link 414. Each of the compute instances is attached to a VNIC that is executed by NVD 412. In the embodiment in FIG. 4, VM1 406 is attached to VNIC-VM1 420, and VM2 408 is attached to VNIC-VM2 422.
As shown in FIG. 4, NIC 410 comprises two logical NICs: logical NIC A 416 and logical NIC B 418. Each virtual machine is attached to, and configured to work with, its own logical NIC. For example, VM1 406 is attached to logical NIC A 416, and VM2 408 is attached to logical NIC B 418. Even though host machine 402 comprises only one physical NIC 410 that is shared by the multiple tenants, due to the logical NICs, each tenant's virtual machine believes it has its own host machine and NIC.
In certain embodiments, each logical NIC is assigned its own VLAN ID. Thus, a specific VLAN ID is assigned to logical NIC A 416 for tenant #1, and a separate VLAN ID is assigned to logical NIC B 418 for tenant #2. When a packet is communicated from VM1 406, a tag assigned to tenant #1 is attached to the packet by the hypervisor, and the packet is then communicated from host machine 402 to NVD 412 over link 414. In a similar manner, when a packet is communicated from VM2 408, a tag assigned to tenant #2 is attached to the packet by the hypervisor, and the packet is then communicated from host machine 402 to NVD 412 over link 414. Accordingly, a packet 424 communicated from host machine 402 to NVD 412 has an associated tag 426 that identifies the specific tenant and the associated VM. On the NVD, for a packet 424 received from host machine 402, the tag 426 associated with the packet is used to determine whether the packet is to be processed by VNIC-VM1 420 or by VNIC-VM2 422. The packet is then processed by the corresponding VNIC. The configuration depicted in FIG. 4 enables each tenant's compute instance to believe that it owns its own host machine and NIC. The setup depicted in FIG. 4 provides for I/O virtualization for supporting multi-tenancy.
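The tag-based demultiplexing described above can be sketched as follows: the hypervisor attaches the tenant's VLAN tag to each outgoing packet, and the NVD uses that tag to select the VNIC that processes the packet. The VLAN ID values and names below are hypothetical, chosen only for illustration.

```python
# Assumed VLAN IDs for the two logical NICs (illustrative values).
VLAN_TENANT1 = 100   # logical NIC A / tenant #1
VLAN_TENANT2 = 200   # logical NIC B / tenant #2

# The NVD's (mock) mapping from VLAN tag to the VNIC that handles it.
vnic_by_vlan = {VLAN_TENANT1: "VNIC-VM1", VLAN_TENANT2: "VNIC-VM2"}

def hypervisor_tag(payload, vlan_id):
    """Attach the tenant's VLAN tag before the packet leaves the host."""
    return {"vlan": vlan_id, "payload": payload}

def nvd_demux(packet):
    """On the NVD, dispatch the packet to the VNIC matching its tag."""
    return vnic_by_vlan[packet["vlan"]]

pkt = hypervisor_tag(b"data from VM1", VLAN_TENANT1)
assert nvd_demux(pkt) == "VNIC-VM1"
```

In an actual deployment the tag would be an IEEE 802.1Q header on the Ethernet frame rather than a dictionary field, but the dispatch logic is the same.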
FIG. 5 depicts a simplified block diagram of a physical network 500 according to certain embodiments. The embodiment depicted in FIG. 5 is structured as a Clos network. A Clos network is a particular type of network topology designed to provide connection redundancy while maintaining high bisection bandwidth and maximum resource utilization. A Clos network is a type of non-blocking, multistage or multi-tiered switching network, where the number of stages or tiers can be two, three, four, five, etc. The embodiment depicted in FIG. 5 is a 3-tiered network comprising tiers 1, 2, and 3. The TOR switches 504 represent tier-0 switches in the Clos network. One or more NVDs are connected to the TOR switches. Tier-0 switches are also referred to as edge devices of the physical network. The tier-0 switches are connected to tier-1 switches, which are also referred to as leaf switches. In the embodiment depicted in FIG. 5, a set of "n" tier-0 TOR switches is connected to a set of "n" tier-1 switches, and together they form a pod. Each tier-0 switch in a pod is interconnected to all the tier-1 switches in the pod, but there is no connectivity of switches between pods. In certain implementations, two pods are referred to as a block. Each block is served by or connected to a set of "n" tier-2 switches (sometimes referred to as spine switches). There can be several blocks in the physical network topology. The tier-2 switches are in turn connected to "n" tier-3 switches (sometimes referred to as super-spine switches). Communication of packets over physical network 500 is typically performed using one or more layer-3 communication protocols. Typically, all the layers of the physical network, except for the TOR layer, are n-way redundant, thus allowing for high availability. Policies may be specified for pods and blocks to control the visibility of switches to each other in the physical network so as to enable scaling of the physical network.
A feature of a Clos network is that the maximum hop count to reach from one tier-0 switch to another tier-0 switch (or from an NVD connected to a tier-0 switch to another NVD connected to a tier-0 switch) is fixed. For example, in a 3-tiered Clos network, at most seven hops are needed for a packet to reach from one NVD to another NVD, where the source and target NVDs are connected to the leaf tier of the Clos network. Likewise, in a 4-tiered Clos network, at most nine hops are needed for a packet to reach from one NVD to another NVD, where the source and target NVDs are connected to the leaf tier of the Clos network. Thus, a Clos network architecture maintains consistent latency throughout the network, which is important for communication within and between data centers. A Clos topology scales horizontally and is cost effective. The bandwidth/throughput capacity of the network can easily be increased by adding more switches at the various tiers (e.g., more leaf and spine switches) and by increasing the number of links between the switches at adjacent tiers.
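The two hop counts given above (seven hops for a 3-tiered Clos network, nine for a 4-tiered one) are consistent with the pattern 2 × tiers + 1: in the worst case a packet climbs from the leaf tier to the topmost tier and descends again. This small sanity check assumes that interpretation holds for higher tier counts as well:

```python
def max_hops(tiers: int) -> int:
    """Maximum NVD-to-NVD hop count in an n-tiered Clos network,
    inferred from the 3-tier (7 hops) and 4-tier (9 hops) cases."""
    return 2 * tiers + 1

assert max_hops(3) == 7   # 3-tiered Clos network, as stated above
assert max_hops(4) == 9   # 4-tiered Clos network, as stated above
```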
In certain embodiments, each resource within the CSPI is assigned a unique identifier referred to as a Cloud Identifier (CID). This identifier is included as part of the resource's information and can be used to manage the resource, for example, via a console or via APIs. An example syntax for a CID is:
ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>
ocid1: The literal string specifying the CID version;
resource type: The type of the resource (e.g., instance, volume, VCN, subnet, user, group, etc.);
realm: The realm the resource is in. Example values are "c1" for the commercial realm, "c2" for the Government Cloud realm, or "c3" for the Federal Government Cloud realm, etc. Each realm may have its own domain name;
region: The region the resource is in. If the region is not applicable to the resource, this part may be blank;
future use: Reserved for future use.
Unique ID: The unique part of the ID. The format may vary depending on the type of resource or service.
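A hedged sketch of splitting a CID string into the fields listed above. The sample value below is made up for illustration only; as noted, the format of the unique-ID portion varies by resource type and service.

```python
def parse_cid(cid: str) -> dict:
    """Split an ocid1-style CID into its dot-separated components."""
    version, resource_type, realm, region, future_use, unique_id = cid.split(".", 5)
    if version != "ocid1":
        raise ValueError("unsupported CID version: " + version)
    return {
        "resource_type": resource_type,
        "realm": realm,
        "region": region or None,        # may be blank if not applicable
        "future_use": future_use or None,  # reserved, typically blank
        "unique_id": unique_id,
    }

# Hypothetical example CID (not a real identifier).
fields = parse_cid("ocid1.instance.c1.us-ashburn-1..exampleuniqueid123")
assert fields["resource_type"] == "instance"
assert fields["realm"] == "c1"
assert fields["future_use"] is None
```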
FIG. 6 depicts a block diagram of a cloud infrastructure 600 incorporating a CLOS network arrangement, according to certain embodiments. Cloud infrastructure 600 includes a plurality of racks (e.g., rack 1 610 . . . rack M 620). Each rack includes a plurality of host machines (also referred to herein as hosts). For example, rack 1 610 includes a plurality of hosts (e.g., K host machines), host 1-A 612 to host 1-K 614, and rack M 620 includes K host machines, i.e., host M-A 622 to host M-K 624. It is appreciated that the depiction of cloud infrastructure 600 in FIG. 6 (i.e., each rack including the same number of host machines, e.g., K host machines) is intended to be illustrative and non-limiting. For example, rack M 620 may include a different number of host machines as compared to the number of host machines included in rack 1 610.
Each host machine includes a plurality of graphics processing units (GPUs). For example, as shown in FIG. 6, host 1-A 612 includes N GPUs, e.g., GPU 1 613. Further, it is appreciated that the depiction in FIG. 6 of each host machine including the same number of GPUs (i.e., N GPUs) is intended to be illustrative and non-limiting, i.e., each host machine may include a different number of GPUs. Each rack includes a top-of-rack (TOR) switch that is communicatively coupled to the GPUs included in the host machines. For instance, rack 1 610 includes a TOR switch (i.e., TOR 1) 616 that is communicatively coupled to host 1-A 612 and host 1-K 614, and rack M 620 includes a TOR switch (i.e., TOR M) 626 that is communicatively coupled to host M-A 622 and host M-K 624. It is appreciated that the TOR switches depicted in FIG. 6 (e.g., TOR 1 616 and TOR M 626) each include N ports that are used to communicatively couple the TOR switch to the N GPUs included in each host machine of the rack. The coupling of the TOR to the GPUs as shown in FIG. 6 is intended to be illustrative and non-limiting. For example, in some embodiments, the TOR switch may include a plurality of ports, each of which corresponds to a GPU included in each of the host machines, i.e., a GPU on a host machine may be connected via a link to a unique port of the TOR. Data traffic received by a network device (e.g., TOR 1 616) is characterized herein as being received on a port-link of the network device. For instance, if GPU 1 613 of host 1-A 612 transmits a data packet to TOR 1 616 (via link 617), the data packet is received on port 619 of TOR 1. In turn, TOR 1 616 characterizes this data packet as being received on a first port-link of the TOR. It is appreciated that a similar characterization may be applied with respect to links on outgoing ports of the TOR.
The TOR switches of the racks are communicatively coupled to multiple spine switches, e.g., spine switch 1 630 and spine switch P 640. As shown in FIG. 6, TOR 1 616 is connected to spine switch 1 630 via two links and to spine switch P 640 via two further links. Traffic transmitted from a particular TOR switch to a spine switch is referred to herein as uplink traffic, whereas traffic transmitted from a spine switch to a TOR switch is referred to herein as downlink traffic. According to some embodiments, the TOR switches and the spine switches are connected in a CLOS network arrangement (e.g., a multi-stage switching network), with each TOR switch forming a "leaf" node of the CLOS network.
According to some embodiments, the GPUs included in the host machines execute machine-learning-related tasks. In such a configuration, a single task may be executed/distributed across a large number of GPUs (e.g., 64 GPUs), which in turn may be distributed across multiple host machines and multiple racks. Because all of these GPUs work on the same task (i.e., a workload), they all need to communicate with one another in a synchronized manner. Moreover, at any given point in time, the GPUs are either in a compute mode or in a communication mode, i.e., the GPUs communicate with one another at approximately the same time. The speed of the workload is determined by the speed of the slowest GPU.
Typically, to route packets from a source GPU (e.g., GPU 1 613 of host machine 1-A 612) to a destination GPU (e.g., GPU 1 623 of host machine M-A 622), equal-cost multipath (ECMP) routing is used. In ECMP routing, when multiple equal-cost paths are available for routing traffic from a sender to a receiver, a selection technique is used to choose a particular path. Consequently, at a network device (e.g., a TOR switch or a spine switch) that receives the traffic, a selection algorithm is used to choose the egress link on which the traffic received by the network device is to be forwarded. This egress link selection is performed at each network device along the path from the sender to the receiver. A hash-based selection algorithm is a widely used ECMP selection technique, in which the hash may be based on a 4-tuple of the packet (e.g., source port, destination port, source IP address, destination IP address).
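The hash-based selection described above can be sketched as follows. This is an illustrative sketch only: real switches hash in hardware, and the exact hash function and field set are vendor-specific assumptions here.

```python
import zlib

def ecmp_select_egress(src_ip, dst_ip, src_port, dst_port, egress_links):
    """Pick an egress link by hashing the packet's 4-tuple (sketch)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    index = zlib.crc32(key) % len(egress_links)
    return egress_links[index]

links = ["uplink-0", "uplink-1", "uplink-2", "uplink-3"]
# All packets of one flow share a 4-tuple, so they hash to the same link.
a = ecmp_select_egress("10.0.0.1", "10.0.1.1", 49152, 4791, links)
b = ecmp_select_egress("10.0.0.1", "10.0.1.1", 49152, 4791, links)
assert a == b  # flow-affine: no per-packet reordering within a flow
```

Because the selection depends only on the packet's 4-tuple, it keeps each flow pinned to one path, which is precisely why it is flow-aware but not bandwidth-aware.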
ECMP routing is a flow-aware routing technique in which each flow (i.e., a stream of data packets) is pinned to a particular path as it is routed. Accordingly, the packets of a flow are forwarded by a network device using a particular egress port/link. This is typically done to ensure that the packets of a flow arrive in order, i.e., no packet reordering is required. However, ECMP routing is not bandwidth (or throughput) aware. In other words, the TOR and spine switches perform ECMP load balancing of flows over the parallel links with only statistical knowledge of the flows and no knowledge of their throughput.
A problem with standard ECMP routing (i.e., routing that is only flow-aware) is that flows received by a network device on two separate ingress links may be hashed to the same egress link, resulting in a flow collision. For example, consider a situation in which two flows arrive on two separate 100G ingress links and each of the flows is hashed to the same 100G egress link. Packets are dropped, because the input bandwidth is 200G while the output bandwidth is only 100G. FIG. 7, described below, illustrates an example flow collision scenario 700.
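The collision arithmetic can be made concrete with a toy model. The hash function, port numbers, and link names below are illustrative assumptions chosen so that the two flows collide; they are not taken from the disclosure.

```python
LINK_CAPACITY_GBPS = 100

def select_egress(flow_tuple, egress_links):
    # Deterministic toy hash: sum of tuple fields modulo link count.
    return egress_links[sum(flow_tuple) % len(egress_links)]

links = ["to-spine-1", "to-spine-P"]
flow1 = (49152, 4791)   # hypothetical (src_port, dst_port) pair
flow2 = (49154, 4791)   # a second flow that happens to hash identically

egress1 = select_egress(flow1, links)
egress2 = select_egress(flow2, links)

# Both 100G flows land on the same 100G egress link ...
demand = {}
for egress in (egress1, egress2):
    demand[egress] = demand.get(egress, 0) + LINK_CAPACITY_GBPS

# ... so 200G of demand meets 100G of capacity, and packets are dropped.
oversubscribed = [l for l, d in demand.items() if d > LINK_CAPACITY_GBPS]
```

Nothing in the hash prevents this outcome: with n equal-cost links, two independent flows land on the same link with probability roughly 1/n.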
As shown in FIG. 7, there are two flows: flow 1 710, directed from a first GPU of host machine 1-A 612 to TOR switch 616 (represented by the solid line), and flow 2 720, directed from another GPU of the same host machine 612 to TOR switch 616 (represented by the dashed line). Note that both flows are directed to TOR switch 616 on separate links, i.e., separate ingress port links of the TOR. It is assumed that all links of FIG. 7 have a capacity (i.e., bandwidth) of 100G. When TOR switch 616 executes the ECMP routing algorithm, it is possible that both flows are hashed to use the same egress port link of the TOR, e.g., the TOR port connected to link 730, which connects TOR switch 616 to spine switch 630. In this case, a collision occurs between the two flows (represented by the 'X' character), resulting in packet loss.
This collision scenario is generally problematic for all types of traffic, regardless of protocol. For instance, TCP is resilient in that, if a packet is dropped and the sender does not receive an acknowledgment for the dropped packet, the packet is retransmitted. However, the situation is worse for remote direct memory access (RDMA) traffic. RDMA networks do not use TCP for various reasons (e.g., TCP has complex logic that does not lend itself well to low latency and high throughput). Rather, RDMA networks use protocols such as RDMA over InfiniBand or RDMA over Converged Ethernet (RoCE). In RoCE, there is a congestion control algorithm in which the sender slows down packet transmission when it detects congestion or lost packets. Moreover, a dropped packet causes retransmission not only of the dropped packet but also of multiple packets around the dropped packet, further consuming the available bandwidth and resulting in poor performance.
Techniques for overcoming the flow collision problem described above are described below. The flow collision problem affects both CPU and GPU traffic. However, the flow collision problem is a much bigger issue for GPUs due to their strict time-synchronization requirements. Additionally, it is noted that the standard ECMP routing mechanism, owing to its inherent property of routing traffic in a statistical, bandwidth-unaware manner, leads to flow collision scenarios regardless of whether or not the network is oversubscribed. A non-oversubscribed network is one in which the bandwidth of the ingress links of a device (e.g., a TOR or spine switch) is equal to the bandwidth of its egress links. Note that if all links have the same bandwidth capacity, the number of ingress links equals the number of egress links.
According to some embodiments, techniques for overcoming the flow collision problem described above include a GPU policy-based routing mechanism (also referred to herein as a GPU-based traffic engineering mechanism) and a modified ECMP routing mechanism. Each of these techniques is described in detail below.
FIG. 8 depicts a policy-based routing mechanism implemented by the network devices of the cloud infrastructure of FIG. 6, according to certain embodiments. In particular, cloud infrastructure 800 includes a plurality of racks, e.g., rack 1 810, . . . , rack M 820. Each rack includes a host machine that houses multiple GPUs. For example, rack 1 810 includes a host machine, i.e., host machine 1-A 812, and rack M 820 includes a host machine, i.e., host machine M-A 822. Each rack includes a TOR switch, e.g., rack 1 810 includes TOR 1 switch 814, and rack M 820 includes TOR M switch 824. The host machine in each rack is communicatively coupled to the respective TOR switch of the rack. The TOR switches, i.e., TOR 1 switch 814 and TOR M switch 824, are in turn communicatively coupled to the spine switches, i.e., spine switch 830 and spine switch 840. For the sake of describing and illustrating policy-based routing, cloud infrastructure 800 is depicted as including a single host machine per rack. However, it is to be understood that each rack in the infrastructure may include more than one host machine.
According to some embodiments, data packets are routed from a sender to a receiver in the network in a hop-by-hop manner. A routing policy that binds an ingress port link to an egress port link is configured on each network device. The network device may be a TOR switch or a spine switch. Referring to FIG. 8, two flows are depicted: flow 1, originating from GPU 1 of host machine 812, whose intended destination is GPU 1 of host machine 822, and flow 2, originating from GPU N of host machine 812, whose intended destination is GPU N of host machine 822. The network devices, e.g., TOR 1 814, spine 1 830, TOR M 824, and spine P 840, are configured to bind (or associate) an ingress port link with an egress port link. Ingress port links are mapped to egress port links (for example, in a policy table) on each network device.
Referring to FIG. 8, with respect to flow 1 (i.e., the flow shown by solid lines), when TOR 1 814 receives a packet on link 850, the TOR is configured to forward the received packet on egress link 855. Likewise, when spine switch 830 receives the packet on link 855, it is configured to forward the packet on egress link 860. Finally, when TOR M 824 receives the packet on link 860, it is configured to forward the packet on egress link 865 in order to deliver the packet to its intended destination, i.e., GPU 1 of host machine 822. With respect to flow 2 (i.e., the flow shown by dashed lines), when TOR 1 814 receives a packet on link 870, TOR 1 is configured to forward the received packet on egress link 875 toward spine P 840. When spine switch 840 receives the packet on link 875, it is configured to forward the packet on egress link 880. Finally, when TOR M 824 receives the packet on link 880, it is configured to forward the packet on egress link 885 in order to deliver the packet to its intended destination, i.e., GPU N of host machine 822.
In this manner, according to the GPU policy-based routing mechanism, each network device is configured to bind an ingress port/link to an egress port/link in order to avoid collisions. For example, consider the first hops of flow 1 and flow 2, where TOR 1 814 receives a first data packet (belonging to flow 1) on link 850 and receives a second data packet (belonging to flow 2) on link 870. Because TOR 1 814 is configured to forward data packets arriving on ingress link/port 850 to egress link/port 855 and to forward data packets arriving on ingress link/port 870 to egress link/port 875, it is guaranteed that the first and second data packets do not collide. It is to be understood that on each network device within the cloud infrastructure there is a one-to-one correspondence between an ingress port link and an egress port link, i.e., the mapping between an ingress port link and an egress port link is independent of the flows and/or the protocols carried by the flows. In addition, in the event that an egress link of a given network device fails, according to some embodiments, the network device is configured to change its routing policy from GPU policy-based routing to standard ECMP routing in order to obtain a new egress link (i.e., one of the multiple available egress links) and transmit the flow on that new egress link. Note that in this case a flow collision may occur, causing congestion.
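A minimal sketch of such a per-device policy table follows, including the fallback to ECMP when the pinned egress link is down. The link names, table layout, and fallback callable are illustrative assumptions, not taken from the disclosure.

```python
# Per-device policy table pinning each ingress port link to a dedicated
# egress port link (the FIG. 8 first-hop bindings on TOR 1).
POLICY_TABLE = {
    "ingress-850": "egress-855",   # flow 1's first hop
    "ingress-870": "egress-875",   # flow 2's first hop
}

def forward(ingress_link, link_is_up, ecmp_fallback):
    """Return the egress link for a packet arriving on ingress_link."""
    egress = POLICY_TABLE[ingress_link]
    if link_is_up(egress):
        return egress
    # Pinned link failed: fall back to standard ECMP (collisions possible).
    return ecmp_fallback()

chosen = forward("ingress-850", lambda link: True, lambda: "egress-875")
assert chosen == "egress-855"
# With the pinned link down, the ECMP fallback is used instead.
chosen = forward("ingress-850", lambda link: False, lambda: "egress-875")
assert chosen == "egress-875"
```

Because the table is a one-to-one mapping, two packets arriving on different ingress links can never be sent out on the same egress link while all links are up.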
Turning now to FIG. 9, a block diagram of a cloud infrastructure 900 illustrating different types of connections, according to certain embodiments, is shown. Infrastructure 900 includes a plurality of racks, e.g., rack 1 910, rack D 920, and rack M 930. Racks 910 and 930 include a plurality of host machines. For example, rack 1 910 includes a plurality of host machines (e.g., K host machines), i.e., host machine 1-A 912, . . . , host machine 1-K 914, and rack M includes K host machines, i.e., host machine M-A 932, . . . , host machine M-K 934. Rack D 920 includes one or more host machines 922, each of which includes several CPUs, i.e., host machine 922 is a host machine with no GPUs. Each of racks 910, 920, and 930 includes a TOR switch, i.e., TOR 1 916, TOR D 926, and TOR M 936, respectively, which is communicatively coupled to the host machines of the respective rack. Additionally, TOR switches 916, 926, and 936 are communicatively coupled to several spine switches, i.e., spine switches 940 and 950.
As shown in FIG. 9, a first connection (i.e., connection 1, represented by dashed lines) exists from a GPU host (i.e., host machine 1-A 912) to another GPU host (i.e., host machine M-A 932), and a second connection (i.e., connection 2, represented by dotted lines) exists from a GPU host (i.e., host machine 1-K 914) to a non-GPU host (i.e., host machine D 922). With respect to connection 1, the data packets associated with the flow are routed hop-by-hop, with each intervening network device configured according to the GPU policy-based routing mechanism described above with reference to FIG. 8. Specifically, the data packets of connection 1 are routed along the dashed links shown in FIG. 9, i.e., from host 1-A to TOR 1, from TOR 1 to spine switch 1, from spine switch 1 to TOR M, and finally from TOR M to host M-A. Each of the network devices is configured to bind an ingress port link to an egress port link of the network device.
In contrast, connection 2 originates from a GPU-based host (i.e., host machine 1-K 914) and is destined for a non-GPU-based host (i.e., host machine D 922). In this case, TOR 1 916 is configured to bind an ingress link port, i.e., the port and link 971 on which it receives data packets from the GPU-based host, to an egress link port, i.e., an egress port and the link 972 connected to the egress port. Thus, TOR 1 forwards the data packets to spine switch 1 940 using egress link 972. Because the data packets are destined for a non-GPU-based host, spine switch 1 940 does not use the policy-based routing mechanism to forward the packets to TOR D. Instead, spine switch 1 940 uses the ECMP routing mechanism to select one of the available links 980 to forward the packets to TOR D, which then forwards the data packets to host machine D 922.
In some embodiments, the flow collision problem described above with reference to FIG. 7 is avoided by the network devices of the cloud infrastructures described herein by implementing a modified version of ECMP routing. In this case, the ECMP hash algorithm is modified to hash traffic arriving on a particular ingress port link of a network device to the same egress port link of the network device. In particular, each network device implements a modified ECMP algorithm to determine the egress port link to be used for forwarding a packet, such that, according to the modified ECMP algorithm, every packet received on a particular ingress port link always hashes to the same egress port link. For example, consider the case where a first packet and a second packet are received on a first ingress port link of a network device, while a third packet and a fourth packet are received on a second ingress port link of the network device. In such a situation, the network device implementing the modified ECMP algorithm transmits the first packet and the second packet on a first egress port link of the network device and transmits the third packet and the fourth packet on a second egress port link of the network device, where the first ingress port link differs from the second ingress port link and the first egress port link differs from the second egress port link. In certain implementations, information stored in a forwarding information base (e.g., forwarding tables, ECMP tables, etc.) may be modified to enable the functionality noted above.
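The modified selection rule can be sketched as hashing on the ingress port alone. The port indexing below is an illustrative assumption; the point is that the mapping is a bijection on a non-oversubscribed device.

```python
def modified_ecmp_egress(ingress_index, egress_links):
    """Egress link derived from the ingress port link alone (sketch).

    With equal ingress and egress link counts (a non-oversubscribed
    device), the modulo mapping is a bijection: every packet arriving
    on a given ingress link leaves on the same egress link, and no two
    ingress links share an egress link.
    """
    return egress_links[ingress_index % len(egress_links)]

egress_links = ["egress-0", "egress-1", "egress-2", "egress-3"]
# Packets 1 and 2 arrive on ingress 0; packets 3 and 4 on ingress 1.
assert modified_ecmp_egress(0, egress_links) == modified_ecmp_egress(0, egress_links)
assert modified_ecmp_egress(0, egress_links) != modified_ecmp_egress(1, egress_links)
```

Contrast this with standard 4-tuple hashing, under which two flows on distinct ingress links may still collide on one egress link.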
Turning now to FIG. 10, an example rack configuration 1000, according to certain embodiments, is shown. As shown in FIG. 10, rack 1000 includes two host machines, i.e., a host machine 1010 and a host machine 1020. It is to be appreciated that although rack 1000 is shown as including two host machines, rack 1000 may include a larger number of host machines. Each host machine includes multiple GPUs and multiple CPUs. For example, host machine 1010 includes a plurality of CPUs 1012 and a plurality of GPUs 1014, while host machine 1020 includes a plurality of CPUs 1022 and a plurality of GPUs 1024.
The host machines are communicatively coupled to different network fabrics via different TOR switches. For example, host machines 1010 and 1020 are communicatively coupled to a network fabric (referred to herein as the front-end network of rack 1000) via a TOR switch, i.e., TOR 1 switch 1050. The front-end network may correspond to an external network. More specifically, host machine 1010 is connected to the front-end network via a network interface card (NIC) 1030 and a network virtualization device (NVD) 1035, which is coupled to TOR 1 switch 1050. Host machine 1020 is connected to the front-end network via a NIC 1040 and an NVD 1045, which is coupled to TOR 1 switch 1050. Thus, in some embodiments, the CPUs of each host machine may communicate with the front-end network via the NIC, the NVD, and the TOR switch. For example, CPUs 1012 of host machine 1010 may communicate with the front-end network via NIC 1030, NVD 1035, and TOR 1 switch 1050.
Host machines 1010 and 1020 are also connected to another network fabric on which quality of service (QoS) is enabled. The QoS-enabled network fabric is referred to herein as a back-end network and corresponds to a GPU cluster network as shown in FIG. 6. Host machine 1010 is connected via a different NIC 1065 to a TOR 2 switch 1060, which communicatively couples host machine 1010 to the back-end network. Similarly, host machine 1020 is connected via a NIC 1080 to TOR 2 switch 1060, which communicatively couples host machine 1020 to the back-end network. Thus, the plurality of GPUs of each host machine may communicate with the back-end network via a NIC and a TOR switch. In this manner, each host machine uses separate TOR switches (i.e., one used by the CPUs and one used by the GPUs) to communicate with the front-end and back-end networks, respectively.
FIG. 11A depicts an example flowchart 1100 illustrating steps performed by a network device in routing a packet, according to certain embodiments. The processing depicted in FIG. 11A may be implemented in software (e.g., code, instructions, a program) executed by one or more processing units (e.g., processors, cores) of the respective systems, in hardware, or in combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 11A and described below is intended to be illustrative and not limiting. Although FIG. 11A depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel.
The processing begins at step 1105, where a network device receives a data packet transmitted by a graphics processing unit (GPU) of a host machine. At step 1110, the network device determines the ingress port/link on which the packet was received. At step 1115, the network device identifies, based on policy routing information, an egress port/link that corresponds to the ingress port/link (on which the packet was received). According to some embodiments, the policy routing information corresponds to a GPU routing table that is pre-configured for the network device and that binds each ingress port link of the network device to a unique egress port link of the network device.
The processing then proceeds to step 1120, where a query is performed to determine whether the egress port link is in an operational state, e.g., whether the egress link is up. If the response to the query is affirmative (i.e., the link is up), the processing proceeds to step 1125; otherwise, if the response to the query is negative (i.e., the link is in a failed/inoperative state), the processing proceeds to step 1130. At step 1125, the network device uses the egress port link (identified in step 1115) to forward the received data packet to another network device. At step 1130, the network device obtains flow information of the data packet; for example, the flow information may correspond to a 4-tuple associated with the packet (i.e., source port, destination port, source IP address, destination IP address). Based on the obtained flow information, the network device uses ECMP routing to identify a new egress port link, i.e., an available egress port link. The processing then proceeds to step 1135, where the network device uses the newly obtained egress port link to forward the data packet received in step 1105.
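The decision flow of FIG. 11A can be sketched as follows: look up the pinned egress link for the packet's ingress link (steps 1110-1115); if it is up, use it (step 1125); otherwise hash the packet's 4-tuple over the surviving links, as in standard ECMP (steps 1130-1135). The table contents, link names, and hash choice are illustrative assumptions.

```python
import zlib

GPU_ROUTING_TABLE = {"ingress-0": "egress-0", "ingress-1": "egress-1"}

def route_packet(ingress_link, four_tuple, up_links):
    pinned = GPU_ROUTING_TABLE[ingress_link]   # steps 1110-1115
    if pinned in up_links:
        return pinned                          # step 1125
    # Step 1130: pinned link failed; fall back to 4-tuple ECMP hashing
    # over the egress links that are still available.
    candidates = sorted(up_links)
    key = "|".join(map(str, four_tuple)).encode()
    return candidates[zlib.crc32(key) % len(candidates)]

pkt = ("10.0.0.1", "10.0.1.1", 49152, 4791)
assert route_packet("ingress-0", pkt, {"egress-0", "egress-1"}) == "egress-0"
# If egress-0 is down, an available link is chosen by ECMP instead.
assert route_packet("ingress-0", pkt, {"egress-1"}) == "egress-1"
```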
FIG. 11B depicts another example flowchart 1150 illustrating steps performed by a network device in routing a packet, according to certain embodiments. The processing depicted in FIG. 11B may be implemented in software (e.g., code, instructions, a program) executed by one or more processing units (e.g., processors, cores) of the respective systems, in hardware, or in combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 11B and described below is intended to be illustrative and not limiting. Although FIG. 11B depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in a different order, or some steps may be performed in parallel.
The processing begins at step 1155, where a network device receives a data packet transmitted by a graphics processing unit (GPU) of a host machine. At step 1160, the network device determines flow information of the received packet. In some implementations, the flow information may correspond to a 4-tuple associated with the packet (i.e., source port, destination port, source IP address, destination IP address). At step 1165, the network device computes an egress port link by implementing a modified version of ECMP routing. According to the modified ECMP algorithm, every packet received on a particular ingress port link is always hashed for transmission on the same egress port link.
The processing then proceeds to step 1170, where a query is performed to determine whether the egress port link (determined in step 1165) is in an operational state, e.g., whether the egress link is up. If the response to the query is affirmative (i.e., the link is up), the processing proceeds to step 1175; otherwise, if the response to the query is negative (i.e., the link is in a failed/inoperative state), the processing proceeds to step 1180. At step 1175, the network device uses the egress port link (identified in step 1165) to forward the received data packet to another network device. If the identified egress port link (of step 1165) is determined to be in an inoperative state, the processing proceeds to step 1180. At step 1180, the network device implements ECMP routing (i.e., standard ECMP routing) to identify a new egress port link. The processing then proceeds to step 1185, where the network device uses the recomputed egress port link to forward the data packet received in step 1155.
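The distinction from FIG. 11A is the primary selection: here the first-choice egress link comes from the modified ECMP rule (ingress link alone, step 1165), with standard 4-tuple ECMP used only as the failure fallback (step 1180). A sketch under the same illustrative naming assumptions:

```python
import zlib

def route_packet_11b(ingress_index, four_tuple, egress_links, up_links):
    # Step 1165: modified ECMP — egress derived from the ingress link only.
    primary = egress_links[ingress_index % len(egress_links)]
    if primary in up_links:
        return primary                         # step 1175
    # Step 1180: primary link down; standard 4-tuple ECMP over survivors.
    candidates = sorted(up_links)
    key = "|".join(map(str, four_tuple)).encode()
    return candidates[zlib.crc32(key) % len(candidates)]

links = ["egress-0", "egress-1"]
pkt = ("10.0.0.2", "10.0.1.2", 49153, 4791)
assert route_packet_11b(0, pkt, links, {"egress-0", "egress-1"}) == "egress-0"
# With egress-0 down, the standard ECMP fallback picks a surviving link.
assert route_packet_11b(0, pkt, links, {"egress-1"}) == "egress-1"
```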
Note that the techniques described above for forwarding data packets originating from the GPUs of host machines result in a 20% performance gain for smaller clusters and a 70% performance gain for larger clusters (i.e., a 3x improvement over the standard ECMP routing algorithm).
As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance.
In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into those VMs. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc.
In most cases, a cloud computing model will require the participation of a cloud provider. The cloud provider may, but need not, be a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services.
In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines that can be spun up on demand, or the like).
In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first.
In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files.
In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more security group rules provisioned to define how the security of the network will be set up for one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve.
In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within those environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned.
FIG. 12 is a block diagram 1200 illustrating an example pattern of an IaaS architecture, according to at least one embodiment. Service operators 1202 can be communicatively coupled to a secure host tenancy 1204 that can include a virtual cloud network (VCN) 1206 and a secure host subnet 1208. In some examples, the service operators 1202 may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, a cellular telephone, an iPad®, a computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile® and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled. Alternatively, the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. The client computing devices can be workstation computers running any of a variety of commercially available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as Google Chrome OS. Alternatively, or in addition, the client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN 1206 and/or the Internet.
The VCN 1206 can include a local peering gateway (LPG) 1210 that can be communicatively coupled to a secure shell (SSH) VCN 1212 via an LPG 1210 contained in the SSH VCN 1212. The SSH VCN 1212 can include an SSH subnet 1214, and the SSH VCN 1212 can be communicatively coupled to a control plane VCN 1216 via the LPG 1210 contained in the control plane VCN 1216. Also, the SSH VCN 1212 can be communicatively coupled to a data plane VCN 1218 via an LPG 1210. The control plane VCN 1216 and the data plane VCN 1218 can be contained in a service tenancy 1219 that can be owned and/or operated by the IaaS provider.
The control plane VCN 1216 can include a control plane demilitarized zone (DMZ) tier 1220 that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep security breaches contained. Additionally, the DMZ tier 1220 can include one or more load balancer (LB) subnet(s) 1222, a control plane app tier 1224 that can include app subnet(s) 1226, and a control plane data tier 1228 that can include database (DB) subnet(s) 1230 (e.g., frontend DB subnet(s) and/or backend DB subnet(s)). The LB subnet(s) 1222 contained in the control plane DMZ tier 1220 can be communicatively coupled to the app subnet(s) 1226 contained in the control plane app tier 1224 and to an Internet gateway 1234 that can be contained in the control plane VCN 1216, and the app subnet(s) 1226 can be communicatively coupled to the DB subnet(s) 1230 contained in the control plane data tier 1228 and to a service gateway 1236 and a network address translation (NAT) gateway 1238. The control plane VCN 1216 can include the service gateway 1236 and the NAT gateway 1238.
The control plane VCN 1216 can include a data plane mirror app tier 1240 that can include app subnet(s) 1226. The app subnet(s) 1226 contained in the data plane mirror app tier 1240 can include a virtual network interface controller (VNIC) 1242 that can execute a compute instance 1244. The compute instance 1244 can communicatively couple the app subnet(s) 1226 of the data plane mirror app tier 1240 to app subnet(s) 1226 that can be contained in a data plane app tier 1246.
The data plane VCN 1218 can include the data plane app tier 1246, a data plane DMZ tier 1248, and a data plane data tier 1250. The data plane DMZ tier 1248 can include LB subnet(s) 1222 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246 and to the Internet gateway 1234 of the data plane VCN 1218. The app subnet(s) 1226 can be communicatively coupled to the service gateway 1236 of the data plane VCN 1218 and to the NAT gateway 1238 of the data plane VCN 1218. The data plane data tier 1250 can also include the DB subnet(s) 1230 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246.
The Internet gateway 1234 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to a metadata management service 1252 that can be communicatively coupled to the public Internet 1254. The public Internet 1254 can be communicatively coupled to the NAT gateway 1238 of the control plane VCN 1216 and of the data plane VCN 1218. The service gateway 1236 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to cloud services 1256.
In some examples, the service gateway 1236 of the control plane VCN 1216 or of the data plane VCN 1218 can make application programming interface (API) calls to cloud services 1256 without going over the public Internet 1254. API calls to cloud services 1256 from the service gateway 1236 can be one-way: the service gateway 1236 can make API calls to cloud services 1256, and cloud services 1256 can send requested data to the service gateway 1236. However, cloud services 1256 may not initiate API calls to the service gateway 1236.
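The one-way calling relationship described above can be sketched as a simple policy check. The class and identifier names below are hypothetical illustrations, not part of any actual gateway implementation:

```python
# Minimal sketch of the one-way call policy described above: the service
# gateway may initiate API calls to cloud services, but cloud services
# may not initiate calls back to the gateway. All names are hypothetical.

class OneWayCallPolicy:
    GATEWAY = "service_gateway_1236"
    CLOUD = "cloud_services_1256"

    def is_allowed(self, caller: str, callee: str) -> bool:
        # Only gateway-initiated calls are permitted; responses to those
        # calls flow back over the same connection and need no new call.
        return caller == self.GATEWAY and callee == self.CLOUD


policy = OneWayCallPolicy()
print(policy.is_allowed(policy.GATEWAY, policy.CLOUD))  # gateway -> cloud: allowed
print(policy.is_allowed(policy.CLOUD, policy.GATEWAY))  # cloud -> gateway: denied
```

The asymmetry is the point: requested data returns as a response to the gateway's own call, so no inbound call path toward the gateway needs to exist.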
In some examples, the secure host tenancy 1204 can be directly connected to the service tenancy 1219, which may be otherwise isolated. The secure host subnet 1208 can communicate with the SSH subnet 1214 through an LPG 1210 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 1208 to the SSH subnet 1214 may give the secure host subnet 1208 access to other entities within the service tenancy 1219.
The control plane VCN 1216 may allow users of the service tenancy 1219 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 1216 may be deployed or otherwise used in the data plane VCN 1218. In some examples, the control plane VCN 1216 can be isolated from the data plane VCN 1218, and the data plane mirror application layer 1240 of the control plane VCN 1216 can communicate with the data plane application layer 1246 of the data plane VCN 1218 via VNICs 1242 that may be included in the data plane mirror application layer 1240 and the data plane application layer 1246.
In some examples, users or customers of the system can make requests, for example create, read, update, or delete (CRUD) operations, over the public Internet 1254, which can forward the requests to the metadata management service 1252. The metadata management service 1252 can forward the request to the control plane VCN 1216 via the Internet gateway 1234. The request can be received by the LB subnet(s) 1222 contained in the control plane DMZ layer 1220. The LB subnet(s) 1222 may determine that the request is valid and, in response to this determination, the LB subnet(s) 1222 can forward the request to the application subnet(s) 1226 contained in the control plane application layer 1224. If the request is validated and requires a call to the public Internet 1254, the call to the public Internet 1254 can be transmitted to the NAT gateway 1238, which can make the call to the public Internet 1254. Metadata that the request desires to store can be stored in the DB subnet(s) 1230.
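The request path described above can be sketched as an ordered hop list with a validity check at the LB subnet. Hop labels are illustrative stand-ins for the numbered components, not real identifiers or APIs:

```python
# Illustrative sketch of the CRUD request path described above:
# public internet -> metadata management service -> Internet gateway ->
# LB subnet (validity check) -> application subnet, with optional egress
# via the NAT gateway and optional storage in the DB subnet(s).

def route_request(request: dict) -> list:
    path = ["public_internet_1254", "metadata_mgmt_1252",
            "internet_gateway_1234", "lb_subnet_1222"]
    if not request.get("valid", False):
        return path + ["rejected"]          # LB subnet drops invalid requests
    path.append("app_subnet_1226")
    if request.get("needs_internet", False):
        path.append("nat_gateway_1238")     # outbound call made on the request's behalf
    if request.get("stores_metadata", False):
        path.append("db_subnet_1230")       # durable storage for request metadata
    return path


print(route_request({"valid": True, "needs_internet": True}))
```

Note that the application subnet is only reached after the LB-layer validation succeeds, mirroring the DMZ layer's perimeter role.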
In some examples, the data plane mirror application layer 1240 can facilitate direct communication between the control plane VCN 1216 and the data plane VCN 1218. For example, it may be desirable to apply changes, updates, or other suitable modifications to the configuration of the resources contained in the data plane VCN 1218. Via a VNIC 1242, the control plane VCN 1216 can communicate directly with the resources contained in the data plane VCN 1218 and can thereby apply the appropriate changes, updates, or other suitable modifications to the configuration of those resources.
In some embodiments, the control plane VCN 1216 and the data plane VCN 1218 may be contained in the service tenancy 1219. In this case, the user or customer of the system may not own or operate either the control plane VCN 1216 or the data plane VCN 1218. Instead, the IaaS provider may own or operate the control plane VCN 1216 and the data plane VCN 1218, both of which may be contained in the service tenancy 1219. This embodiment can provide network isolation that can prevent users or customers from interacting with the resources of other users or customers. Also, this embodiment may allow users or customers of the system to store databases privately without relying on the public Internet 1254, which may not have a desired level of security, for storage.
In other embodiments, the LB subnet(s) 1222 contained in the control plane VCN 1216 can be configured to receive a signal from the service gateway 1236. In this embodiment, the control plane VCN 1216 and the data plane VCN 1218 can be configured to be called by a customer of the IaaS provider without calling the public Internet 1254. Customers of the IaaS provider may desire this embodiment because the database(s) that the customers use may be controlled by the IaaS provider and stored in the service tenancy 1219, which may be isolated from the public Internet 1254.
FIG. 13 is a block diagram 1300 illustrating another example pattern of an IaaS architecture in accordance with at least one embodiment. Service operators 1302 (e.g., the service operators 1202 of FIG. 12) can be communicatively coupled to a secure host tenancy 1304 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1306 (e.g., the VCN 1206 of FIG. 12) and a secure host subnet 1308 (e.g., the secure host subnet 1208 of FIG. 12). The VCN 1306 can include a local peering gateway (LPG) 1310 (e.g., the LPG 1210 of FIG. 12) that can be communicatively coupled to a secure shell (SSH) VCN 1312 (e.g., the SSH VCN 1212 of FIG. 12) via an LPG 1310 contained in the SSH VCN 1312. The SSH VCN 1312 can include an SSH subnet 1314 (e.g., the SSH subnet 1214 of FIG. 12), and the SSH VCN 1312 can be communicatively coupled to a control plane VCN 1316 (e.g., the control plane VCN 1216 of FIG. 12) via an LPG 1310 contained in the control plane VCN 1316. The control plane VCN 1316 can be contained in a service tenancy 1319 (e.g., the service tenancy 1219 of FIG. 12), and a data plane VCN 1318 (e.g., the data plane VCN 1218 of FIG. 12) can be contained in a customer tenancy 1321 that may be owned or operated by users or customers of the system.
The control plane VCN 1316 may include a control plane DMZ layer 1320 (e.g., the control plane DMZ layer 1220 of FIG. 12) that can include LB subnet(s) 1322 (e.g., the LB subnet(s) 1222 of FIG. 12), a control plane application layer 1324 (e.g., the control plane application layer 1224 of FIG. 12) that can include application subnet(s) 1326 (e.g., the application subnet(s) 1226 of FIG. 12), and a control plane data layer 1328 (e.g., the control plane data layer 1228 of FIG. 12) that can include database (DB) subnet(s) 1330 (e.g., similar to the DB subnet(s) 1230 of FIG. 12). The LB subnet(s) 1322 contained in the control plane DMZ layer 1320 can be communicatively coupled to the application subnet(s) 1326 contained in the control plane application layer 1324 and to an Internet gateway 1334 (e.g., the Internet gateway 1234 of FIG. 12) that may be contained in the control plane VCN 1316, and the application subnet(s) 1326 can be communicatively coupled to the DB subnet(s) 1330 contained in the control plane data layer 1328 and to a service gateway 1336 (e.g., the service gateway 1236 of FIG. 12) and a network address translation (NAT) gateway 1338 (e.g., the NAT gateway 1238 of FIG. 12). The control plane VCN 1316 may include the service gateway 1336 and/or the NAT gateway 1338.
The control plane VCN 1316 may include a data plane mirror application layer 1340 (e.g., the data plane mirror application layer 1240 of FIG. 12) that can include application subnet(s) 1326. The application subnet(s) 1326 contained in the data plane mirror application layer 1340 may include a virtual network interface controller (VNIC) 1342 (e.g., the VNIC 1242 of FIG. 12) that can execute a compute instance 1344 (e.g., similar to the compute instance 1244 of FIG. 12). The compute instance 1344 can facilitate communication between the application subnet(s) 1326 of the data plane mirror application layer 1340 and the application subnet(s) 1326 that may be contained in a data plane application layer 1346 (e.g., the data plane application layer 1246 of FIG. 12) via the VNIC 1342 contained in the data plane mirror application layer 1340 and the VNIC 1342 contained in the data plane application layer 1346.
The Internet gateway 1334 contained in the control plane VCN 1316 can be communicatively coupled to a metadata management service 1352 (e.g., the metadata management service 1252 of FIG. 12), which can be communicatively coupled to the public Internet 1354 (e.g., the public Internet 1254 of FIG. 12). The public Internet 1354 can be communicatively coupled to the NAT gateway 1338 contained in the control plane VCN 1316. The service gateway 1336 contained in the control plane VCN 1316 can be communicatively coupled to cloud services 1356 (e.g., the cloud services 1256 of FIG. 12).
In some examples, the data plane VCN 1318 can be contained in the customer tenancy 1321. In this case, the IaaS provider may provide the control plane VCN 1316 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 1344 that is contained in the service tenancy 1319. Each compute instance 1344 may allow communication between the control plane VCN 1316, contained in the service tenancy 1319, and the data plane VCN 1318, contained in the customer tenancy 1321. The compute instance 1344 may allow resources that are provisioned in the control plane VCN 1316, contained in the service tenancy 1319, to be deployed or otherwise used in the data plane VCN 1318, contained in the customer tenancy 1321.
In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy 1321. In this example, the control plane VCN 1316 can include the data plane mirror application layer 1340, which can include application subnet(s) 1326. The data plane mirror application layer 1340 can reside in the data plane VCN 1318, but the data plane mirror application layer 1340 may not live in the data plane VCN 1318. That is, the data plane mirror application layer 1340 may have access to the customer tenancy 1321, but the data plane mirror application layer 1340 may not exist in the data plane VCN 1318 or be owned or operated by the customer of the IaaS provider. The data plane mirror application layer 1340 may be configured to make calls to the data plane VCN 1318, but may not be configured to make calls to any entity contained in the control plane VCN 1316. The customer may desire to deploy or otherwise use resources in the data plane VCN 1318 that are provisioned in the control plane VCN 1316, and the data plane mirror application layer 1340 can facilitate the desired deployment or other usage of the customer's resources.
In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN 1318. In this embodiment, the customer can determine what the data plane VCN 1318 can access, and the customer may restrict access to the public Internet 1354 from the data plane VCN 1318. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 1318 to any outside networks or databases. The application of filters and controls by the customer onto the data plane VCN 1318, contained in the customer tenancy 1321, can help isolate the data plane VCN 1318 from other customers and from the public Internet 1354.
In some embodiments, cloud services 1356 can be called by the service gateway 1336 to access services that may not exist on the public Internet 1354, in the control plane VCN 1316, or in the data plane VCN 1318. The connection between cloud services 1356 and the control plane VCN 1316 or the data plane VCN 1318 may not be live or continuous. Cloud services 1356 may exist on a different network owned or operated by the IaaS provider. Cloud services 1356 can be configured to receive calls from the service gateway 1336 and can be configured to not receive calls from the public Internet 1354. Some cloud services 1356 can be isolated from other cloud services 1356, and the control plane VCN 1316 can be isolated from cloud services 1356 that may not be in the same region as the control plane VCN 1316. For example, the control plane VCN 1316 may be located in "Region 1," and a cloud service "Deployment 12" may be located in Region 1 and in "Region 2." If a call to Deployment 12 is made by the service gateway 1336 contained in the control plane VCN 1316 located in Region 1, the call may be transmitted to Deployment 12 in Region 1. In this example, the control plane VCN 1316, or Deployment 12 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 12 in Region 2.
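The region-affinity rule in the example above, where a call is always served by the deployment in the caller's own region, can be sketched as follows. The function and the deployment labels are hypothetical illustrations of the routing decision, not a real routing API:

```python
# Sketch of the region-affinity rule described above: a call from the
# service gateway in Region 1 to a service deployed in both regions is
# served by the Region 1 deployment, never the Region 2 deployment.

def pick_deployment(caller_region: str, deployments: dict) -> str:
    # deployments maps a region name to that region's deployment identifier.
    if caller_region in deployments:
        return deployments[caller_region]   # same-region deployment wins
    raise LookupError("no deployment reachable from " + caller_region)


deployments = {"Region 1": "Deployment 12 (Region 1)",
               "Region 2": "Deployment 12 (Region 2)"}
print(pick_deployment("Region 1", deployments))  # Deployment 12 (Region 1)
```

The cross-region deployment is simply never a candidate, which models the statement that the Region 1 components may not be communicatively coupled to Deployment 12 in Region 2.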
FIG. 14 is a block diagram 1400 illustrating another example pattern of an IaaS architecture in accordance with at least one embodiment. Service operators 1402 (e.g., the service operators 1202 of FIG. 12) can be communicatively coupled to a secure host tenancy 1404 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1406 (e.g., the VCN 1206 of FIG. 12) and a secure host subnet 1408 (e.g., the secure host subnet 1208 of FIG. 12). The VCN 1406 can include an LPG 1410 (e.g., the LPG 1210 of FIG. 12) that can be communicatively coupled to an SSH VCN 1412 (e.g., the SSH VCN 1212 of FIG. 12) via an LPG 1410 contained in the SSH VCN 1412. The SSH VCN 1412 can include an SSH subnet 1414 (e.g., the SSH subnet 1214 of FIG. 12), and the SSH VCN 1412 can be communicatively coupled to a control plane VCN 1416 (e.g., the control plane VCN 1216 of FIG. 12) via an LPG 1410 contained in the control plane VCN 1416 and to a data plane VCN 1418 (e.g., the data plane VCN 1218 of FIG. 12) via an LPG 1410 contained in the data plane VCN 1418. The control plane VCN 1416 and the data plane VCN 1418 can be contained in a service tenancy 1419 (e.g., the service tenancy 1219 of FIG. 12).
The control plane VCN 1416 may include a control plane DMZ layer 1420 (e.g., the control plane DMZ layer 1220 of FIG. 12) that can include load balancer (LB) subnet(s) 1422 (e.g., the LB subnet(s) 1222 of FIG. 12), a control plane application layer 1424 (e.g., the control plane application layer 1224 of FIG. 12) that can include application subnet(s) 1426 (e.g., similar to the application subnet(s) 1226 of FIG. 12), and a control plane data layer 1428 (e.g., the control plane data layer 1228 of FIG. 12) that can include DB subnet(s) 1430. The LB subnet(s) 1422 contained in the control plane DMZ layer 1420 can be communicatively coupled to the application subnet(s) 1426 contained in the control plane application layer 1424 and to an Internet gateway 1434 (e.g., the Internet gateway 1234 of FIG. 12) that may be contained in the control plane VCN 1416, and the application subnet(s) 1426 can be communicatively coupled to the DB subnet(s) 1430 contained in the control plane data layer 1428 and to a service gateway 1436 (e.g., the service gateway 1236 of FIG. 12) and a network address translation (NAT) gateway 1438 (e.g., the NAT gateway 1238 of FIG. 12). The control plane VCN 1416 may include the service gateway 1436 and/or the NAT gateway 1438.
The data plane VCN 1418 may include a data plane application layer 1446 (e.g., the data plane application layer 1246 of FIG. 12), a data plane DMZ layer 1448 (e.g., the data plane DMZ layer 1248 of FIG. 12), and a data plane data layer 1450 (e.g., the data plane data layer 1250 of FIG. 12). The data plane DMZ layer 1448 can include LB subnet(s) 1422 that can be communicatively coupled to trusted application subnet(s) 1460 and untrusted application subnet(s) 1462 of the data plane application layer 1446 and to the Internet gateway 1434 contained in the data plane VCN 1418. The trusted application subnet(s) 1460 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418, the NAT gateway 1438 contained in the data plane VCN 1418, and the DB subnet(s) 1430 contained in the data plane data layer 1450. The untrusted application subnet(s) 1462 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418 and the DB subnet(s) 1430 contained in the data plane data layer 1450. The data plane data layer 1450 can include DB subnet(s) 1430 that can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418.
The untrusted application subnet(s) 1462 can include one or more primary VNICs 1464(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1466(1)-(N). Each tenant VM 1466(1)-(N) can be communicatively coupled to a respective application subnet 1467(1)-(N) that can be contained in respective container egress VCNs 1468(1)-(N), which can be contained in respective customer tenancies 1470(1)-(N). Respective secondary VNICs 1472(1)-(N) can facilitate communication between the untrusted application subnet(s) 1462 contained in the data plane VCN 1418 and the application subnets contained in the container egress VCNs 1468(1)-(N). Each container egress VCN 1468(1)-(N) can include a NAT gateway 1438 that can be communicatively coupled to the public Internet 1454 (e.g., the public Internet 1254 of FIG. 12).
The Internet gateway 1434 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to a metadata management service 1452 (e.g., the metadata management service 1252 of FIG. 12), which can be communicatively coupled to the public Internet 1454. The public Internet 1454 can be communicatively coupled to the NAT gateway 1438 contained in the control plane VCN 1416 and contained in the data plane VCN 1418. The service gateway 1436 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to cloud services 1456.
In some embodiments, the data plane VCN 1418 can be integrated with customer tenancies 1470. This integration can be useful or desirable for customers of the IaaS provider in some cases, for example, a case in which support may be desired when executing code. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run the code given to the IaaS provider by the customer.
In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane application layer 1446. Code to run the function may be executed in the VMs 1466(1)-(N), and the code may not be configured to run anywhere else in the data plane VCN 1418. Each VM 1466(1)-(N) may be connected to one customer tenancy 1470. Respective containers 1471(1)-(N) contained in the VMs 1466(1)-(N) may be configured to run the code. In this case, there can be a dual isolation (e.g., the containers 1471(1)-(N) running the code, where the containers 1471(1)-(N) may be contained at least in the VMs 1466(1)-(N) that are contained in the untrusted application subnet(s) 1462), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging the network of a different customer. The containers 1471(1)-(N) may be communicatively coupled to the customer tenancy 1470 and may be configured to transmit or receive data from the customer tenancy 1470. The containers 1471(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN 1418. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers 1471(1)-(N).
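The communication restriction behind the dual isolation above can be sketched as a predicate: each container may exchange data only with its own customer tenancy. The names are hypothetical labels for the numbered components:

```python
# Sketch of the dual-isolation communication rule described above: a
# container runs inside a tenant VM in the untrusted subnet and may
# exchange data only with its own customer tenancy, nothing else in
# the data plane VCN. All identifiers are illustrative.

def may_communicate(container_tenancy: str, peer: str) -> bool:
    # Container 1471(i) may talk to its own customer tenancy 1470(i)
    # and to no other entity in the data plane VCN.
    return peer == container_tenancy


print(may_communicate("customer_tenancy_1470_1", "customer_tenancy_1470_1"))  # True
print(may_communicate("customer_tenancy_1470_1", "customer_tenancy_1470_2"))  # False
```

Even if the customer's code misbehaves inside the container, the VM boundary and this tenancy-scoped rule together keep the damage away from the provider's network and from other customers.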
In some embodiments, the trusted application subnet(s) 1460 can run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted application subnet(s) 1460 can be communicatively coupled to the DB subnet(s) 1430 and be configured to execute CRUD operations in the DB subnet(s) 1430. The untrusted application subnet(s) 1462 can be communicatively coupled to the DB subnet(s) 1430, but in this embodiment the untrusted application subnet(s) may be configured to execute read operations in the DB subnet(s) 1430. The containers 1471(1)-(N), which can be contained in the VMs 1466(1)-(N) of each customer and may run code from the customer, may not be communicatively coupled with the DB subnet(s) 1430.
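The three tiers of database access just described can be captured in a small permission table. This is a hypothetical sketch of the policy, not a real access-control API:

```python
# Sketch of the DB access rules above: trusted subnets may run full CRUD
# against the DB subnet(s), untrusted subnets may only read, and
# customer-code containers have no DB coupling at all. Illustrative only.

DB_PERMISSIONS = {
    "trusted_app_subnet_1460": {"create", "read", "update", "delete"},
    "untrusted_app_subnet_1462": {"read"},
    "customer_container_1471": set(),   # no communicative coupling to DB subnets
}

def db_allowed(source: str, op: str) -> bool:
    return op in DB_PERMISSIONS.get(source, set())


print(db_allowed("untrusted_app_subnet_1462", "read"))    # True
print(db_allowed("untrusted_app_subnet_1462", "update"))  # False
```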
In other embodiments, the control plane VCN 1416 and the data plane VCN 1418 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 1416 and the data plane VCN 1418. However, communication can occur indirectly through at least one method. An LPG 1410 may be established by the IaaS provider that can facilitate communication between the control plane VCN 1416 and the data plane VCN 1418. In another example, the control plane VCN 1416 or the data plane VCN 1418 can make a call to cloud services 1456 via the service gateway 1436. For example, a call to cloud services 1456 from the control plane VCN 1416 can include a request for a service that can communicate with the data plane VCN 1418.
FIG. 15 is a block diagram 1500 illustrating another example pattern of an IaaS architecture in accordance with at least one embodiment. Service operators 1502 (e.g., the service operators 1202 of FIG. 12) can be communicatively coupled to a secure host tenancy 1504 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1506 (e.g., the VCN 1206 of FIG. 12) and a secure host subnet 1508 (e.g., the secure host subnet 1208 of FIG. 12). The VCN 1506 can include an LPG 1510 (e.g., the LPG 1210 of FIG. 12) that can be communicatively coupled to an SSH VCN 1512 (e.g., the SSH VCN 1212 of FIG. 12) via an LPG 1510 contained in the SSH VCN 1512. The SSH VCN 1512 can include an SSH subnet 1514 (e.g., the SSH subnet 1214 of FIG. 12), and the SSH VCN 1512 can be communicatively coupled to a control plane VCN 1516 (e.g., the control plane VCN 1216 of FIG. 12) via an LPG 1510 contained in the control plane VCN 1516 and to a data plane VCN 1518 (e.g., the data plane VCN 1218 of FIG. 12) via an LPG 1510 contained in the data plane VCN 1518. The control plane VCN 1516 and the data plane VCN 1518 can be contained in a service tenancy 1519 (e.g., the service tenancy 1219 of FIG. 12).
The control plane VCN 1516 may include a control plane DMZ layer 1520 (e.g., the control plane DMZ layer 1220 of FIG. 12) that can include LB subnet(s) 1522 (e.g., the LB subnet(s) 1222 of FIG. 12), a control plane application layer 1524 (e.g., the control plane application layer 1224 of FIG. 12) that can include application subnet(s) 1526 (e.g., the application subnet(s) 1226 of FIG. 12), and a control plane data layer 1528 (e.g., the control plane data layer 1228 of FIG. 12) that can include DB subnet(s) 1530 (e.g., the DB subnet(s) 1430 of FIG. 14). The LB subnet(s) 1522 contained in the control plane DMZ layer 1520 can be communicatively coupled to the application subnet(s) 1526 contained in the control plane application layer 1524 and to an Internet gateway 1534 (e.g., the Internet gateway 1234 of FIG. 12) that may be contained in the control plane VCN 1516, and the application subnet(s) 1526 can be communicatively coupled to the DB subnet(s) 1530 contained in the control plane data layer 1528 and to a service gateway 1536 (e.g., the service gateway 1236 of FIG. 12) and a network address translation (NAT) gateway 1538 (e.g., the NAT gateway 1238 of FIG. 12). The control plane VCN 1516 may include the service gateway 1536 and/or the NAT gateway 1538.
The data plane VCN 1518 may include a data plane application layer 1546 (e.g., the data plane application layer 1246 of FIG. 12), a data plane DMZ layer 1548 (e.g., the data plane DMZ layer 1248 of FIG. 12), and a data plane data layer 1550 (e.g., the data plane data layer 1250 of FIG. 12). The data plane DMZ layer 1548 can include LB subnet(s) 1522 that can be communicatively coupled to trusted application subnet(s) 1560 (e.g., the trusted application subnet(s) 1460 of FIG. 14) and untrusted application subnet(s) 1562 (e.g., the untrusted application subnet(s) 1462 of FIG. 14) of the data plane application layer 1546 and to the Internet gateway 1534 contained in the data plane VCN 1518. The trusted application subnet(s) 1560 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518, the NAT gateway 1538 contained in the data plane VCN 1518, and the DB subnet(s) 1530 contained in the data plane data layer 1550. The untrusted application subnet(s) 1562 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518 and the DB subnet(s) 1530 contained in the data plane data layer 1550. The data plane data layer 1550 can include DB subnet(s) 1530 that can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518.
The untrusted application subnet(s) 1562 can include primary VNICs 1564(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1566(1)-(N) residing within the untrusted application subnet(s) 1562. Each tenant VM 1566(1)-(N) can run code in a respective container 1567(1)-(N) and be communicatively coupled to an application subnet 1526 that can be contained in a data plane application layer 1546 that can be contained in a container egress VCN 1568. Respective secondary VNICs 1572(1)-(N) can facilitate communication between the untrusted application subnet(s) 1562 contained in the data plane VCN 1518 and the application subnet contained in the container egress VCN 1568. The container egress VCN 1568 can include a NAT gateway 1538 that can be communicatively coupled to the public Internet 1554 (e.g., the public Internet 1254 of FIG. 12).
The Internet gateway 1534 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to a metadata management service 1552 (e.g., the metadata management service 1252 of FIG. 12), which can be communicatively coupled to the public Internet 1554. The public Internet 1554 can be communicatively coupled to the NAT gateway 1538 contained in the control plane VCN 1516 and contained in the data plane VCN 1518. The service gateway 1536 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to cloud services 1556.
In some examples, the pattern illustrated by the architecture of block diagram 1500 of FIG. 15 may be considered an exception to the pattern illustrated by the architecture of block diagram 1400 of FIG. 14 and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The respective containers 1567(1)-(N) that are contained in the VMs 1566(1)-(N) for each customer can be accessed in real time by the customer. The containers 1567(1)-(N) can be configured to make calls to respective secondary VNICs 1572(1)-(N) contained in the application subnet(s) 1526 of the data plane application layer 1546 that can be contained in the container egress VCN 1568. The secondary VNICs 1572(1)-(N) can transmit the calls to the NAT gateway 1538, which can transmit the calls to the public Internet 1554. In this example, the containers 1567(1)-(N), which can be accessed in real time by the customer, can be isolated from the control plane VCN 1516 and can be isolated from other entities contained in the data plane VCN 1518. The containers 1567(1)-(N) can also be isolated from the resources of other customers.
In other examples, the customer can use the containers 1567(1)-(N) to call cloud services 1556. In this example, the customer can run code in the containers 1567(1)-(N) that requests a service from cloud services 1556. The containers 1567(1)-(N) can transmit this request to the secondary VNICs 1572(1)-(N), which can transmit the request to the NAT gateway, which can transmit the request to the public Internet 1554. The public Internet 1554 can transmit the request to the LB subnet(s) 1522 contained in the control plane VCN 1516 via the Internet gateway 1534. In response to determining that the request is valid, the LB subnet(s) can transmit the request to the application subnet(s) 1526, which can transmit the request to cloud services 1556 via the service gateway 1536.
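The hop-by-hop path just described can be written out as an ordered chain, which makes it clear that customer code never reaches cloud services directly. The hop labels are illustrative stand-ins for the numbered components:

```python
# Sketch of the call path described above: customer code in a container
# reaches cloud services only via secondary VNIC -> NAT gateway ->
# public internet -> internet gateway -> LB subnet -> app subnet ->
# service gateway. Hop labels are hypothetical, not real identifiers.

CLOUD_SERVICE_PATH = [
    "container_1567", "secondary_vnic_1572", "nat_gateway_1538",
    "public_internet_1554", "internet_gateway_1534", "lb_subnet_1522",
    "app_subnet_1526", "service_gateway_1536", "cloud_services_1556",
]

def next_hop(current: str) -> str:
    i = CLOUD_SERVICE_PATH.index(current)
    return CLOUD_SERVICE_PATH[i + 1]    # raises IndexError at the destination


print(next_hop("nat_gateway_1538"))  # public_internet_1554
```

The indirection through the public internet and the control plane's LB validation mirrors the disconnected-region scenario: the provider's components only ever see a conventionally validated inbound request.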
It should be appreciated that the IaaS architectures 1200, 1300, 1400, 1500 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
In certain embodiments, the IaaS systems described herein may include a suite of application, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.
FIG. 16 illustrates an example computer system 1600 in which various embodiments may be implemented. The system 1600 may be used to implement any of the computer systems described above. As shown in the figure, the computer system 1600 includes a processing unit 1604 that communicates with a number of peripheral subsystems via a bus subsystem 1602. These peripheral subsystems may include a processing acceleration unit 1606, an I/O subsystem 1608, a storage subsystem 1618, and a communications subsystem 1624. The storage subsystem 1618 includes tangible computer-readable storage media 1622 and a system memory 1610.
Bus subsystem 1602 provides a mechanism for letting the various components and subsystems of the computer system 1600 communicate with each other as intended. Although the bus subsystem 1602 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. The bus subsystem 1602 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
Processing unit 1604, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of the computer system 1600. One or more processors may be included in the processing unit 1604. These processors may include single-core or multi-core processors. In certain embodiments, the processing unit 1604 may be implemented as one or more independent processing units 1632 and/or 1634, with a single-core or multi-core processor included in each processing unit. In other embodiments, the processing unit 1604 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip.
In various embodiments, the processing unit 1604 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can reside in the processor(s) 1604 and/or in the storage subsystem 1618. Through suitable programming, the processor(s) 1604 can provide the various functionalities described above. The computer system 1600 may additionally include a processing acceleration unit 1606, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
The I/O subsystem 1608 may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor, which enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector, which detects eye activity from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., the Siri® navigator) through voice commands.
User interface input devices may also include, without limitation, three-dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like.
User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, and the like. The display subsystem may be a cathode ray tube (CRT), a flat-panel device such as one using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 1600 to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics, and audio/video information, such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
Computer system 1600 may comprise a storage subsystem 1618 comprising software elements, shown as being currently located within system memory 1610. System memory 1610 may store program instructions that are loadable and executable on processing unit 1604, as well as data generated during the execution of these programs.
Depending upon the configuration and type of computer system 1600, system memory 1610 may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated and executed by processing unit 1604. In some implementations, system memory 1610 may include multiple different types of memory, such as static random access memory (SRAM) or dynamic random access memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system 1600, such as during start-up, may typically be stored in the ROM. By way of example, and not limitation, system memory 1610 also illustrates application programs 1612, which may include client applications, web browsers, mid-tier applications, relational database management systems (RDBMS), etc., program data 1614, and an operating system 1616. By way of example, operating system 1616 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like), and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems.
Storage subsystem 1618 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that, when executed by a processor, provides the functionality described above may be stored in storage subsystem 1618. These software modules or instructions may be executed by processing unit 1604. Storage subsystem 1618 may also provide a repository for storing data used in accordance with the present disclosure.
Storage subsystem 1618 may also include a computer-readable storage media reader 1620 that can further be connected to computer-readable storage media 1622. Together and, optionally, in combination with system memory 1610, computer-readable storage media 1622 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
Computer-readable storage media 1622 containing code, or portions of code, can also include any appropriate media known or used in the art, including storage media and communication media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices, or other tangible computer-readable media. This can also include intangible computer-readable media, such as data signals, data transmissions, or any other medium that can be used to transmit the desired information and that can be accessed by computer system 1600.
By way of example, computer-readable storage media 1622 may include a hard disk drive that reads from or writes to non-removable, non-volatile magnetic media, a magnetic disk drive that reads from or writes to a removable, non-volatile magnetic disk, and an optical disk drive that reads from or writes to removable, non-volatile optical media such as a CD-ROM, DVD, or Blu-Ray® disk, or other optical media. Computer-readable storage media 1622 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media 1622 may also include solid-state drives (SSDs) based on non-volatile memory, such as flash-memory-based SSDs, enterprise flash drives, and solid-state ROM, SSDs based on volatile memory, such as solid-state RAM, dynamic RAM, static RAM, and DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM-based and flash-memory-based SSDs. The disk drives and their associated computer-readable media may provide non-transitory storage of computer-readable instructions, data structures, program modules, and other data for computer system 1600.
Communications subsystem 1624 provides an interface to other computer systems and networks. Communications subsystem 1624 serves as an interface for receiving data from, and transmitting data to, other systems from computer system 1600. For example, communications subsystem 1624 may enable computer system 1600 to connect to one or more devices via the Internet. In some embodiments, communications subsystem 1624 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology such as 3G, 4G, or EDGE (enhanced data rates for global evolution), Wi-Fi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communications subsystem 1624 can provide wired network connectivity (e.g., Ethernet) in addition to, or instead of, a wireless interface.
In some embodiments, communications subsystem 1624 may also receive input communication in the form of structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like on behalf of one or more users who may use computer system 1600.
By way of example, communications subsystem 1624 may be configured to receive data feeds 1626 in real-time from users of social networks and/or other communication services, such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third-party information sources.
Additionally, communications subsystem 1624 may also be configured to receive data in the form of continuous data streams, which may include event streams 1628 of real-time events and/or event updates 1630, and which may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
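As an illustrative sketch only (not part of the disclosed embodiments), an unbounded event stream of the kind described above can be consumed incrementally, since the stream has no explicit end and must be processed element by element. All names below are hypothetical, chosen for illustration:

```python
import itertools
import random

def event_stream():
    """Hypothetical unbounded stream of real-time events (e.g., sensor readings)."""
    for seq in itertools.count():
        yield {"seq": seq, "value": random.random()}

def consume(stream, limit):
    """Process a continuous stream incrementally, maintaining a running aggregate.

    'limit' bounds this demonstration only; the stream itself never terminates.
    """
    total = 0.0
    count = 0
    for event in itertools.islice(stream, limit):
        total += event["value"]
        count += 1
    return count, total / count  # events seen, running average

events_seen, running_avg = consume(event_stream(), limit=1000)
```

In practice such consumers run indefinitely against a live feed; the bounded slice here only stands in for a window of an otherwise endless stream.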
Communications subsystem 1624 can also be configured to output the structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 1600.
Computer system 1600 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head-mounted display), a personal computer, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
Due to the ever-changing nature of computers and networks, the description of computer system 1600 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination thereof. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure. Embodiments are not restricted to operation within certain specific computing environments, but are free to operate within a plurality of computing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly.
Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or modules are described as being configured to perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to") unless otherwise noted. The term "connected" is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
Disjunctive language, such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate, and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.