@@ -73,7 +73,7 @@ And adds support for the following HTTP header on PUT requests:

  * If-Match (ETag value retrieved through previous GET)

-This makes it possible to GET a Incus object, modify it and PUT it without
+This makes it possible to GET an Incus object, modify it and PUT it without
 risking to hit a race condition where Incus or another client modified the
 object in the meantime.
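As an illustration of the `If-Match` workflow described in this hunk, a conditional update over the REST API might look like the following sketch; the server URL, client certificate paths, and instance name `c1` are assumptions, not part of the original text.

```sh
# 1. GET the object, keeping its ETag (headers to stdout, body to a file).
ETAG=$(curl -s -D - -o instance.json \
    --cert client.crt --key client.key -k \
    https://incus.example.com:8443/1.0/instances/c1 \
    | awk 'tolower($1) == "etag:" {print $2}' | tr -d '\r')

# 2. Modify instance.json, then PUT it back guarded by If-Match.
#    An HTTP 412 (Precondition Failed) response means Incus or another
#    client changed the object in the meantime.
curl -s -X PUT --cert client.crt --key client.key -k \
    -H "If-Match: ${ETAG}" --data @instance.json \
    https://incus.example.com:8443/1.0/instances/c1
```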
@@ -214,7 +214,7 @@ Rules necessary for `dnsmasq` to work (DHCP/DNS) will always be applied if

 ## `network_routes`

-Introduces `ipv4.routes` and `ipv6.routes` which allow routing additional subnets to a Incus bridge.
+Introduces `ipv4.routes` and `ipv6.routes` which allow routing additional subnets to an Incus bridge.

 ## `storage`
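For illustration, the options introduced by `network_routes` are set like any other network option; the bridge name and subnet below are assumptions.

```sh
# Route an additional subnet to an existing Incus bridge:
incus network set incusbr0 ipv4.routes 192.0.2.0/24
```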
@@ -408,7 +408,7 @@ and `xfs`.

 ## `resources`

-This adds support for querying a Incus daemon for the system resources it has
+This adds support for querying an Incus daemon for the system resources it has
 available.

 ## `kernel_limits`
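A quick way to exercise the `resources` extension is the client's raw query command; a minimal sketch:

```sh
# GET the system resources (CPU, memory, GPUs, ...) of the target daemon:
incus query /1.0/resources
```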
@@ -465,7 +465,7 @@ This makes it possible to retrieve symlinks using the file API.
 ## `network_leases`

 Adds a new `/1.0/networks/NAME/leases` API endpoint to query the lease database on
-bridges which run a Incus-managed DHCP server.
+bridges which run an Incus-managed DHCP server.

 ## `unix_device_hotplug`
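To illustrate `network_leases`, the endpoint can be queried directly or through the matching CLI command; the bridge name is an assumption, and the `list-leases` subcommand should be verified against your client version.

```sh
incus query /1.0/networks/incusbr0/leases
incus network list-leases incusbr0
```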
@@ -1004,7 +1004,7 @@ redirect file-system mounts to their fuse implementation. To this end, set e.g.

 ## `container_disk_ceph`

-This allows for existing a Ceph RBD or CephFS to be directly connected to a Incus container.
+This allows for existing a Ceph RBD or CephFS to be directly connected to an Incus container.

 ## `virtual-machines`
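A hedged sketch of what `container_disk_ceph` enables: attaching an existing RBD image as a disk device. The pool, image, credential names, and mount path are all assumptions; check the `source=ceph:...` syntax against your Incus version.

```sh
incus config device add c1 ceph-disk disk \
    source=ceph:my-pool/my-image \
    ceph.user_name=admin ceph.cluster_name=ceph \
    path=/mnt/ceph-disk
```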
@@ -2222,7 +2222,7 @@ This adds the possibility to import ISO images as custom storage volumes.
 This adds the `--type` flag to [`incus storage volume import`](incus_storage_volume_import.md).

 ## `network_allocations`
-This adds the possibility to list a Incus deployment's network allocations.
+This adds the possibility to list an Incus deployment's network allocations.

 Through the [`incus network list-allocations`](incus_network_list-allocations.md) command and the `--project <PROJECT> | --all-projects` flags,
 you can list all the used IP addresses, hardware addresses (for instances), resource URIs and whether it uses NAT for
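The command and flags named in this hunk can be combined as follows:

```sh
# List all used addresses across every project:
incus network list-allocations --all-projects

# Or scope the listing to a single project:
incus network list-allocations --project default
```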
@@ -37,7 +37,7 @@ any backward compatibility to broken protocol or ciphers.
 (authentication-trusted-clients)=
 ### Trusted TLS clients

-You can obtain the list of TLS certificates trusted by a Incus server with [`incus config trust list`](incus_config_trust_list.md).
+You can obtain the list of TLS certificates trusted by an Incus server with [`incus config trust list`](incus_config_trust_list.md).

 Trusted clients can be added in either of the following ways:
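A short sketch to go with this section: `incus config trust list` is quoted from the text above, while the token-generating `add` subcommand is an assumption to verify against your version.

```sh
# Show the currently trusted certificates:
incus config trust list

# Generate a trust token for a new client (assumed subcommand):
incus config trust add my-laptop
```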
@@ -101,7 +101,7 @@ To enable PKI mode, complete the following steps:
 1. Place the certificates issued by the CA on the clients and the server, replacing the automatically generated ones.
 1. Restart the server.

-In that mode, any connection to a Incus daemon will be done using the
+In that mode, any connection to an Incus daemon will be done using the
 pre-seeded CA certificate.

 If the server certificate isn't signed by the CA, the connection will simply go through the normal authentication mechanism.
@@ -122,7 +122,7 @@ Any user that authenticates through the configured OIDC Identity Provider gets f
 To configure Incus to use OIDC authentication, set the [`oidc.*`](server-options-oidc) server configuration options.
 Your OIDC provider must be configured to enable the [Device Authorization Grant](https://oauth.net/2/device-flow/) type.

-To add a remote pointing to a Incus server configured with OIDC authentication, run [`incus remote add <remote_name> <remote_address>`](incus_remote_add.md).
+To add a remote pointing to an Incus server configured with OIDC authentication, run [`incus remote add <remote_name> <remote_address>`](incus_remote_add.md).
 You are then prompted to authenticate through your web browser, where you must confirm the device code that Incus uses.
 The Incus client then retrieves and stores the access and refresh tokens and provides those to Incus for all interactions.
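A hedged sketch of the OIDC setup referenced above. The `oidc.issuer` and `oidc.client.id` option names are assumptions based on the `oidc.*` namespace, and the issuer URL is fictional.

```sh
# On the server:
incus config set oidc.issuer=https://auth.example.com/ oidc.client.id=incus

# On the client, add the remote and complete the device-code flow in a browser:
incus remote add my-oidc-server https://incus.example.com:8443
```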
@@ -1,5 +1,5 @@
 (backups)=
-# How to back up a Incus server
+# How to back up an Incus server

 In a production setup, you should always back up the contents of your Incus server.
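As one concrete example of a backup operation (instance-level rather than whole-server; the instance name and target path are assumptions):

```sh
# Export an instance, including its snapshots, to a tarball:
incus export c1 /backups/c1.tar.gz
```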
@@ -9,7 +9,7 @@ With a database, you can run a simple query on the database to retrieve this inf

 ## Cowsql

-In a Incus cluster, all members of the cluster must share the same database state.
+In an Incus cluster, all members of the cluster must share the same database state.
 Therefore, Incus uses [Cowsql](https://github.com/cowsql/cowsql), a distributed version of SQLite.
 Cowsql provides replication, fault-tolerance, and automatic failover without the need of external database processes.
@@ -12,7 +12,7 @@ If you want to quickly set up a basic Incus cluster, check out [MicroCloud](http
 (clustering-members)=
 ## Cluster members

-A Incus cluster consists of one bootstrap server and at least two further cluster members.
+An Incus cluster consists of one bootstrap server and at least two further cluster members.
 It stores its state in a [distributed database](../database.md), which is a [Cowsql](https://github.com/cowsql/cowsql/) database replicated using the Raft algorithm.

 While you could create a cluster with only two members, it is strongly recommended that the number of cluster members be at least three.
@@ -116,7 +116,7 @@ The special value of `-1` can be used to have the image copied to all cluster me
 (cluster-groups)=
 ## Cluster groups

-In a Incus cluster, you can add members to cluster groups.
+In an Incus cluster, you can add members to cluster groups.
 You can use these cluster groups to launch instances on a cluster member that belongs to a subset of all available members.
 For example, you could create a cluster group for all members that have a GPU and then launch all instances that require a GPU on this cluster group.
@@ -10,7 +10,7 @@ For example, projects can be useful in the following scenarios:
   You want to keep these instances separate to make it easier to locate and maintain them, and you might want to reuse the same instance names in each customer project for consistency reasons.
   Each instance in a customer project should use the same base configuration (for example, networks and storage), but the configuration might differ between customer projects.

-  In this case, you can create a Incus project for each customer project (thus each group of instances) and use different profiles, networks, and storage for each Incus project.
+  In this case, you can create an Incus project for each customer project (thus each group of instances) and use different profiles, networks, and storage for each Incus project.
 - Your Incus server is shared between multiple users.
   Each user runs their own instances, and might want to configure their own profiles.
   You want to keep the user instances confined, so that each user can interact only with their own instances and cannot see the instances created by other users.
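Following the customer scenario above, creating and switching to a per-customer project is a one-liner each; the project name is illustrative.

```sh
incus project create customer-a
incus project switch customer-a
```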
@@ -47,9 +47,9 @@ Use either of the following methods to grant the required permissions:
 Privileged containers do not have this issue because all UID/GID in the container are the same as outside.
 But that's also the cause of most of the security issues with such privileged containers.

-## How can I run Docker inside a Incus container?
+## How can I run Docker inside an Incus container?

-To run Docker inside a Incus container, set the {config:option}`instance-security:security.nesting` property of the container to `true`:
+To run Docker inside an Incus container, set the {config:option}`instance-security:security.nesting` property of the container to `true`:

     incus config set <container> security.nesting true
@@ -74,7 +74,7 @@ Various configuration files are stored in that directory, for example:
 ## Why can I not ping my Incus instance from another host?

 Many switches do not allow MAC address changes, and will either drop traffic with an incorrect MAC or disable the port totally.
-If you can ping a Incus instance from the host, but are not able to ping it from a different host, this could be the cause.
+If you can ping an Incus instance from the host, but are not able to ping it from a different host, this could be the cause.

 The way to diagnose this problem is to run a `tcpdump` on the uplink and you will see either ``ARP Who has `xx.xx.xx.xx` tell `yy.yy.yy.yy` ``, with you sending responses but them not getting acknowledged, or ICMP packets going in and out successfully, but never being received by the other host.
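A minimal sketch of the diagnosis described above, assuming the uplink interface is `eth0`:

```sh
# Watch ARP and ICMP on the uplink while pinging from the other host:
tcpdump -i eth0 -nn arp or icmp
```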
@@ -1,7 +1,7 @@
 (cluster-form)=
 # How to form a cluster

-When forming a Incus cluster, you start with a bootstrap server.
+When forming an Incus cluster, you start with a bootstrap server.
 This bootstrap server can be an existing Incus server or a newly installed one.

 After initializing the bootstrap server, you can join additional servers to the cluster.
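In shell form, the flow described above might look like this sketch; the member name is an assumption.

```sh
# On the bootstrap server (answer yes to clustering when prompted):
incus admin init

# Still on the bootstrap server, generate a join token for a new member:
incus cluster add server2

# On server2, run `incus admin init` and provide the token when asked to join.
```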
@@ -129,7 +129,7 @@ When you upgrade the last member, the blocked members will notice that all serve

 ## Update the cluster certificate

-In a Incus cluster, the API on all servers responds with the same shared certificate, which is usually a standard self-signed certificate with an expiry set to ten years.
+In an Incus cluster, the API on all servers responds with the same shared certificate, which is usually a standard self-signed certificate with an expiry set to ten years.

 The certificate is stored at `/var/lib/incus/cluster.crt` and is the same on all cluster members.
@@ -57,7 +57,7 @@ See [`incus image import --help`](incus_image_import.md) for all available flags
 ### Import from a file on a remote web server

 You can import image files from a remote web server by URL.
-This method is an alternative to running a Incus server for the sole purpose of distributing an image to users.
+This method is an alternative to running an Incus server for the sole purpose of distributing an image to users.
 It only requires a basic web server with support for custom headers (see {ref}`images-copy-http-headers`).

 The image files must be provided as unified images (see {ref}`image-format-unified`).
@@ -40,7 +40,7 @@ The URL must use HTTPS.
 ### Add a remote Incus server

 <!-- Include start add remotes -->
-To add a Incus server as a remote, enter the following command:
+To add an Incus server as a remote, enter the following command:

     incus remote add <remote_name> <IP|FQDN|URL> [flags]
@@ -1,10 +1,10 @@
 (import-machines-to-instances)=
 # How to import physical or virtual machines to Incus instances

-Incus provides a tool (`incus-migrate`) to create a Incus instance based on an existing disk or image.
+Incus provides a tool (`incus-migrate`) to create an Incus instance based on an existing disk or image.

 You can run the tool on any Linux machine.
-It connects to a Incus server and creates a blank instance, which you can configure during or after the migration.
+It connects to an Incus server and creates a blank instance, which you can configure during or after the migration.
 The tool then copies the data from the disk or image that you provide to the instance.

 ```{note}
@@ -51,7 +51,7 @@ The tool can create both containers and virtual machines:
 </details>
 ````

-Complete the following steps to migrate an existing machine to a Incus instance:
+Complete the following steps to migrate an existing machine to an Incus instance:

 1. Download the `bin.linux.incus-migrate` tool ([`bin.linux.incus-migrate.aarch64`](https://github.com/lxc/incus/releases/latest/download/bin.linux.incus-migrate.aarch64) or [`bin.linux.incus-migrate.x86_64`](https://github.com/lxc/incus/releases/latest/download/bin.linux.incus-migrate.x86_64)) from the **Assets** section of the latest [Incus release](https://github.com/lxc/incus/releases).
 1. Place the tool on the machine that you want to use to create the instance.
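Steps 1 and 2 above in shell form, using the release URL quoted in the hunk:

```sh
wget https://github.com/lxc/incus/releases/latest/download/bin.linux.incus-migrate.x86_64 \
    -O incus-migrate
chmod +x incus-migrate
./incus-migrate    # then follow the interactive prompts
```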
@@ -1,7 +1,7 @@
 (initialize)=
 # How to initialize Incus

-Before you can create a Incus instance, you must configure and initialize Incus.
+Before you can create an Incus instance, you must configure and initialize Incus.

 ## Interactive configuration
@@ -120,7 +120,7 @@ Failure modes when overwriting entities are the same as for the `PUT` requests i

 ```{note}
 The rollback process might potentially fail, although rarely (typically due to backend bugs or limitations).
-You should therefore be careful when trying to reconfigure a Incus daemon via preseed.
+You should therefore be careful when trying to reconfigure an Incus daemon via preseed.
 ```

 ### Default profile
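For reference, a preseed is applied by feeding YAML to the init command, as in this sketch (the file name is an assumption):

```sh
incus admin init --preseed < preseed.yaml
```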
@@ -18,7 +18,7 @@ Image
 Unless the image is available locally, you must specify the name of the image server and the name of the image (for example, `images:ubuntu/22.04` for the official 22.04 Ubuntu image).

 Instance name
-: Instance names must be unique within a Incus deployment (also within a cluster).
+: Instance names must be unique within an Incus deployment (also within a cluster).
   See {ref}`instance-properties` for additional requirements.

 Flags
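Putting the image and instance-name properties together, using the example image quoted above (the instance name is arbitrary):

```sh
incus launch images:ubuntu/22.04 mycontainer
```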
@@ -11,7 +11,7 @@ If you want to directly route external addresses to specific Incus servers or in
 Incus will then act as a BGP peer and advertise relevant routes and next hops to external routers, for example, your network router.
 It automatically establishes sessions with upstream BGP routers and announces the addresses and subnets that it's using.

-The BGP server feature can be used to allow a Incus server or cluster to directly use internal/external address space by getting the specific subnets or addresses routed to the correct host.
+The BGP server feature can be used to allow an Incus server or cluster to directly use internal/external address space by getting the specific subnets or addresses routed to the correct host.
 This way, traffic can be forwarded to the target instance.

 For bridge networks, the following addresses and networks are being advertised:
@@ -117,7 +117,7 @@ There are different ways of working around this problem:

 Uninstall Docker
 : The easiest way to prevent such issues is to uninstall Docker from the system that runs Incus and restart the system.
-  You can run Docker inside a Incus container or virtual machine instead.
+  You can run Docker inside an Incus container or virtual machine instead.

 Enable IPv4 forwarding
 : If uninstalling Docker is not an option, enabling IPv4 forwarding before the Docker service starts will prevent Docker from modifying the global FORWARD policy.
@@ -2,7 +2,7 @@
 # How to integrate with `systemd-resolved`

 If the system that runs Incus uses `systemd-resolved` to perform DNS lookups, you should notify `resolved` of the domains that Incus can resolve.
-To do so, add the DNS servers and domains provided by a Incus network bridge to the `resolved` configuration.
+To do so, add the DNS servers and domains provided by an Incus network bridge to the `resolved` configuration.

 ```{note}
 The `dns.mode` option (see {ref}`network-bridge-options`) must be set to `managed` or `dynamic` if you want to use this feature.
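The `resolved` configuration mentioned above typically comes down to two `resolvectl` calls; the bridge name, DNS address, and domain are assumptions to replace with the values of your Incus network bridge.

```sh
# Tell resolved which DNS server and domain the Incus bridge provides:
resolvectl dns incusbr0 192.0.2.1
resolvectl domain incusbr0 '~incus'
```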
@@ -47,7 +47,7 @@ If you do not specify a `--type` argument, the default type of `bridge` is used.
 (network-create-cluster)=
 ### Create a network in a cluster

-If you are running a Incus cluster and want to create a network, you must create the network for each cluster member separately.
+If you are running an Incus cluster and want to create a network, you must create the network for each cluster member separately.
 The reason for this is that the network configuration, for example, the name of the parent network interface, might be different between cluster members.

 Therefore, you must first create a pending network on each member with the `--target=<cluster_member>` flag and the appropriate configuration for the member.
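The pending-then-instantiate workflow described above, sketched for a three-member cluster (member and network names assumed):

```sh
# Create the pending network on each member:
incus network create my-bridge --target=server1
incus network create my-bridge --target=server2
incus network create my-bridge --target=server3

# Then instantiate it cluster-wide:
incus network create my-bridge
```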
@@ -4,7 +4,7 @@
 You can increase the network bandwidth of your Incus setup by configuring the transmit queue length (`txqueuelen`).
 This change makes sense in the following scenarios:

-- You have a NIC with 1 GbE or higher on a Incus host with a lot of local activity (instance-instance connections or host-instance connections).
+- You have a NIC with 1 GbE or higher on an Incus host with a lot of local activity (instance-instance connections or host-instance connections).
 - You have an internet connection with 1 GbE or higher on your Incus host.

 The more instances you use, the more you can benefit from this tweak.
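The tweak itself is a single `ip` call, assuming a NIC called `eth0`; the queue length value is a common choice, not a figure from this page.

```sh
ip link set eth0 txqueuelen 10000
```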
@@ -1,5 +1,5 @@
 (network-ipam)=
-# How to display IPAM information of a Incus deployment
+# How to display IPAM information of an Incus deployment

 {abbr}`IPAM (IP Address Management)` is a method used to plan, track, and manage the information associated with a computer network's IP address space. In essence, it's a way of organizing, monitoring, and manipulating the IP space in a network.
@@ -33,4 +33,4 @@ The resulting output will look something like this:

 Each listed entry lists the IP address (in CIDR notation) of one of the following Incus entities: `network`, `network-forward`, `network-load-balancer`, and `instance`.
 An entry contains an IP address using the CIDR notation.
-It also contains a Incus resource URI, the type of the entity, whether it is in NAT mode, and the hardware address (only for the `instance` entity).
+It also contains an Incus resource URI, the type of the entity, whether it is in NAT mode, and the hardware address (only for the `instance` entity).
@@ -42,9 +42,9 @@ Complete the following steps to create a standalone OVN network that is connecte
 +------+---------+---------------------+----------------------------------------------+-----------+-----------+
 ```

-## Set up a Incus cluster on OVN
+## Set up an Incus cluster on OVN

-Complete the following steps to set up a Incus cluster that uses an OVN network.
+Complete the following steps to set up an Incus cluster that uses an OVN network.

 Just like Incus, the distributed database for OVN must be run on a cluster that consists of an odd number of members.
 The following instructions use the minimum of three servers, which run both the distributed database for OVN and the OVN controller.
@@ -119,7 +119,7 @@ In addition, you can add any number of servers to the Incus cluster that run onl
         external_ids:ovn-encap-type=geneve \
         external_ids:ovn-encap-ip=<local>

-1. Create a Incus cluster by running `incus admin init` on all machines.
+1. Create an Incus cluster by running `incus admin init` on all machines.
    On the first machine, create the cluster.
    Then join the other machines with tokens by running [`incus cluster add <machine_name>`](incus_cluster_add.md) on the first machine and specifying the token when initializing Incus on the other machine.
 1. On the first machine, create and configure the uplink network:
@@ -8,7 +8,7 @@ Network zones are available for the {ref}`network-ovn` and the {ref}`network-bri
 Network zones can be used to serve DNS records for Incus networks.

 You can use network zones to automatically maintain valid forward and reverse records for all your instances.
-This can be useful if you are operating a Incus cluster with multiple instances across many networks.
+This can be useful if you are operating an Incus cluster with multiple instances across many networks.

 Having DNS records for each instance makes it easier to access network services running on an instance.
 It is also important when hosting, for example, an outbound SMTP service.
@@ -101,7 +101,7 @@ To make use of network zones, you must enable the built-in DNS server.
 To do so, set the {config:option}`server-core:core.dns_address` configuration option to a local address on the Incus server.
 To avoid conflicts with an existing DNS we suggest not using the port 53.
 This is the address on which the DNS server will listen.
-Note that in a Incus cluster, the address may be different on each cluster member.
+Note that in an Incus cluster, the address may be different on each cluster member.

 ```{note}
 The built-in DNS server supports only zone transfers through AXFR.
@@ -51,7 +51,7 @@ This is usually achieved by having some users be a member of the `incus` group b

 Make sure that all user accounts that you want to be able to use Incus are a member of this group.

-Once a member of the group issues a Incus command, Incus creates a confined project for this user and switches to this project.
+Once a member of the group issues an Incus command, Incus creates a confined project for this user and switches to this project.
 If Incus has not been {ref}`initialized <initialize>` at this point, it is automatically initialized (with the default settings).

 If you want to customize the project settings, for example, to impose limits or restrictions, you can do so after the project has been created.
@@ -155,7 +155,7 @@ Use the existing Ceph Object Gateway `https://www.example.com/radosgw` to create
 (storage-pools-cluster)=
 ### Create a storage pool in a cluster

-If you are running a Incus cluster and want to add a storage pool, you must create the storage pool for each cluster member separately.
+If you are running an Incus cluster and want to add a storage pool, you must create the storage pool for each cluster member separately.
 The reason for this is that the configuration, for example, the storage location or the size of the pool, might be different between cluster members.

 Therefore, you must first create a pending storage pool on each member with the `--target=<cluster_member>` flag and the appropriate configuration for the member.
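The same pending-then-instantiate pattern as for networks, sketched for a ZFS pool; the driver, member names, and per-member sources are assumptions.

```sh
incus storage create my-pool zfs --target=server1 source=/dev/sdb
incus storage create my-pool zfs --target=server2 source=/dev/sdc
incus storage create my-pool zfs --target=server3 source=/dev/sdb

# Then instantiate the pool cluster-wide:
incus storage create my-pool zfs
```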
@@ -336,7 +336,7 @@ Because group membership is normally only applied at login, you might need to ei
 ## Upgrade Incus

 After upgrading Incus to a newer version, Incus might need to update its database to a new schema.
-This update happens automatically when the daemon starts up after a Incus upgrade.
+This update happens automatically when the daemon starts up after an Incus upgrade.
 A backup of the database before the update is stored in the same location as the active database (at `/var/lib/incus/database`).

 ```{important}
@@ -4,19 +4,19 @@
 Incus provides tools and functionality to migrate instances in different contexts.

 Migrate existing Incus instances between servers
-: The most basic kind of migration is if you have a Incus instance on one server and want to move it to a different Incus server.
+: The most basic kind of migration is if you have an Incus instance on one server and want to move it to a different Incus server.
   For virtual machines, you can do that as a live migration, which means that you can migrate your VM while it is running and there will be no downtime.

   See {ref}`move-instances` for more information.

 Migrate physical or virtual machines to Incus instances
-: If you have an existing machine, either physical or virtual (VM or container), you can use the `incus-migrate` tool to create a Incus instance based on your existing machine.
+: If you have an existing machine, either physical or virtual (VM or container), you can use the `incus-migrate` tool to create an Incus instance based on your existing machine.
   The tool copies the provided partition, disk or image to the Incus storage pool of the provided Incus server, sets up an instance using that storage and allows you to configure additional settings for the new instance.

   See {ref}`import-machines-to-instances` for more information.

 Migrate instances from LXC to Incus
-: If you are using LXC and want to migrate all or some of your LXC containers to a Incus installation on the same machine, you can use the `lxc-to-incus` tool.
+: If you are using LXC and want to migrate all or some of your LXC containers to an Incus installation on the same machine, you can use the `lxc-to-incus` tool.
   The tool analyzes the LXC configuration and copies the data and configuration of your existing LXC containers into new Incus containers.

   See {ref}`migrate-from-lxc` for more information.
@@ -10,7 +10,7 @@ See [`www.ovn.org`](https://www.ovn.org/) for more information.
 The `ovn` network type allows to create logical networks using the OVN {abbr}`SDN (software-defined networking)`.
 This kind of network can be useful for labs and multi-tenant environments where the same logical subnets are used in multiple discrete networks.

-A Incus OVN network can be connected to an existing managed {ref}`network-bridge` or {ref}`network-physical` to gain access to the wider network.
+An Incus OVN network can be connected to an existing managed {ref}`network-bridge` or {ref}`network-physical` to gain access to the wider network.
 By default, all connections from the OVN logical networks are NATed to an IP allocated from the uplink network.

 See {ref}`network-ovn-setup` for basic instructions for setting up an OVN network.
@@ -21,7 +21,7 @@ Simple streams servers
 Public Incus servers
 : Incus servers that are used solely to serve images and do not run instances themselves.

-  To make a Incus server publicly available over the network on port 8443, set the {config:option}`server-core:core.https_address` configuration option to `:8443` and do not configure any authentication methods (see {ref}`server-expose` for more information).
+  To make an Incus server publicly available over the network on port 8443, set the {config:option}`server-core:core.https_address` configuration option to `:8443` and do not configure any authentication methods (see {ref}`server-expose` for more information).
   Then set the images that you want to share to `public`.

 Incus servers
@@ -1,5 +1,5 @@
 (server-settings)=
-# Server settings for a Incus production setup
+# Server settings for an Incus production setup

 To allow your Incus server to run a large number of instances, configure the following settings to avoid hitting server limits.
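As an illustration of the kind of limits this page covers, a commonly raised set of kernel settings; the file path and values are typical suggestions, not quoted from this hunk.

```sh
# Raise inotify and mmap limits for hosts running many instances (assumed values):
cat <<'EOF' | sudo tee /etc/sysctl.d/99-incus-production.conf
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
EOF
sudo sysctl --system
```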
@@ -50,7 +50,7 @@ The `ceph` driver in Incus uses RBD images for images, and snapshots and clones

 <!-- Include start Ceph driver control -->
 Incus assumes that it has full control over the OSD storage pool.
-Therefore, you should never maintain any file system entities that are not owned by Incus in a Incus OSD storage pool, because Incus might delete them.
+Therefore, you should never maintain any file system entities that are not owned by Incus in an Incus OSD storage pool, because Incus might delete them.
 <!-- Include end Ceph driver control -->

 Due to the way copy-on-write works in Ceph RBD, parent RBD images can't be removed until all children are gone.
@@ -12,7 +12,7 @@ For example, you can manage instances or update the server configuration on the

 ## Authentication

-To be able to add a Incus server as a remote server, the server's API must be exposed, which means that its {config:option}`server-core:core.https_address` server configuration option must be set.
+To be able to add an Incus server as a remote server, the server's API must be exposed, which means that its {config:option}`server-core:core.https_address` server configuration option must be set.

 When adding the server, you must then authenticate with it using the chosen method for {ref}`authentication`.