Assign a placement group

Add one or more compute instances to your placement group. Check out our example API workflow to create a placement group and add compute instances.

linode-cli placement assign-linode 528 \
  --linodes 123 456 \
  --non-compliant true
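The same assignment can be made directly against the API. A minimal Python sketch, assuming the operation is a POST to /v4/placement/groups/{groupId}/assign with the instance IDs in a linodes array; the token is a placeholder, and the group and instance IDs mirror the CLI example above:

```python
import json
from urllib.request import Request

# Placeholder credentials and IDs, matching the CLI example above.
TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"
group_id = 528

# The assign operation takes the target compute instance IDs in the body.
payload = {"linodes": [123, 456]}

request = Request(
    url=f"https://api.linode.com/v4/placement/groups/{group_id}/assign",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Send with urllib.request.urlopen(request) when ready.
```

The request is built but not sent, so you can inspect the URL and body before committing the call.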
OAuth scopes: linodes:read_write
Path Params
apiVersion
string
required

Enum: v4, v4beta. Call either the v4 URL, or v4beta for operations still in Beta.

groupId
integer
required

ID of the placement group to look up. Run the List placement groups operation and store the id for the applicable placement group.

Body Params

The compute instances you want to add to your placement group.

linodes
array of integers

The linodeId values for individual compute instances included in the placement group.

Responses

Response body
object
id
integer

The placement group's ID. You need to provide it for all operations that affect it.

is_compliant
boolean

Whether all of the compute instances in your placement group are compliant. If true, all compute instances meet either the grouped-together or spread-apart model, determined by your selected placement_group_type. If false, a compute instance is out of compliance.

For example, assume you've set anti_affinity:local as your placement_group_type, and your group already has three qualifying compute instances on separate hosts, supporting the spread-apart model. If a fourth compute instance is assigned that's on the same host as one of the existing three, the placement group is non-compliant. Enforce compliance in your group by setting a placement_group_policy.

📘

Fixing compliance is not self-service. You need to wait for our assistance to physically move compute instances to make the group compliant again.
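The spread-apart example above reduces to a simple check: under anti_affinity:local, the group is compliant only while every member sits on a distinct host. This is an illustrative model only, not the service's actual placement logic, and the host names are hypothetical:

```python
def is_spread_apart_compliant(member_hosts):
    """Return True when every compute instance is on a distinct host,
    i.e. the group still meets the spread-apart model."""
    return len(set(member_hosts)) == len(member_hosts)

# Three instances on separate hosts: compliant.
print(is_spread_apart_compliant(["host-a", "host-b", "host-c"]))  # True

# A fourth instance lands on host-a: the group is now non-compliant.
print(is_spread_apart_compliant(["host-a", "host-b", "host-c", "host-a"]))  # False
```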

label
string
length ≥ 1

Filterable The unique name set for the placement group. A label has these constraints:

  • It needs to begin and end with an alphanumeric character.
  • It can only consist of alphanumeric characters, hyphens (-), underscores (_), or periods (.).
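The label constraints above can be expressed as a single regular expression. This is a client-side sketch; the API performs its own server-side validation:

```python
import re

# Begins and ends with an alphanumeric character; interior characters may
# also include hyphens, underscores, and periods; length >= 1.
LABEL_RE = re.compile(r"[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?")

def is_valid_label(label):
    """Return True if the label satisfies the documented constraints."""
    return LABEL_RE.fullmatch(label) is not None

print(is_valid_label("prod-web_pg.1"))    # True
print(is_valid_label("-leading-hyphen"))  # False
```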
members
array of objects

An array of compute instances included in the placement group.

object
is_compliant
boolean

The compliance status of each individual compute instance in the placement group.

linode_id
integer

Read-only The unique identifier for a compute instance included in the placement group.

placement_group_policy
string

How requests to add future compute instances to your placement group are handled, and whether it remains compliant:

  • strict. Don't assign a new compute instance if it breaks the grouped-together or spread-apart model set by the placement_group_type. Use this to ensure the placement group stays compliant (is_compliant: true).
  • flexible. Assign a new compute instance, even if it breaks the grouped-together or spread-apart model set by the placement_group_type. This makes the group non-compliant (is_compliant: false). You need to wait for Akamai to move the offending compute instance to make it compliant again, once the necessary capacity is available in the region. Offers flexibility to add future compute instances if compliance isn't an immediate concern.
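The two policies differ only in what happens when a new assignment would break the model. A minimal sketch of that decision, illustrative only since the service itself enforces the policy:

```python
def assign_instance(policy, breaks_model):
    """Model how placement_group_policy handles an assignment that would
    break the grouped-together or spread-apart model.
    Returns (assigned, is_compliant)."""
    if breaks_model and policy == "strict":
        return False, True   # assignment rejected; group stays compliant
    if breaks_model and policy == "flexible":
        return True, False   # assignment accepted; group becomes non-compliant
    return True, True        # the model holds either way

print(assign_instance("strict", breaks_model=True))    # (False, True)
print(assign_instance("flexible", breaks_model=True))  # (True, False)
```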

📘

In rare cases, non-compliance can occur with a strict placement group if Akamai needs to failover or migrate your compute instances for maintenance. Fixing non-compliance for a strict placement group is prioritized over a flexible group.

Enum: strict, flexible

placement_group_type
string

Filterable, Read-only How compute instances are distributed in your placement group. A placement_group_type using anti-affinity (anti_affinity:local) places compute instances on separate hosts, but still in the same region. This best supports the spread-apart model for high availability. A placement_group_type using affinity places compute instances physically close together, possibly on the same host. This supports the grouped-together model for low latency.

📘

Currently, only anti_affinity:local is available for placement_group_type.

Enum: anti_affinity:local

region
string

Filterable, Read-only The region where the placement group was deployed.
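Putting the response fields together: a sketch that reads a response shaped like the fields documented above and lists any non-compliant members. The sample values are made up for illustration:

```python
import json

# Hypothetical response body, shaped from the fields documented above.
sample = json.loads("""
{
  "id": 528,
  "is_compliant": false,
  "label": "spread-group",
  "members": [
    {"linode_id": 123, "is_compliant": true},
    {"linode_id": 456, "is_compliant": false}
  ],
  "placement_group_policy": "flexible",
  "placement_group_type": "anti_affinity:local",
  "region": "us-east"
}
""")

# Collect the IDs of members that are out of compliance.
offenders = [m["linode_id"] for m in sample["members"] if not m["is_compliant"]]
print(offenders)  # [456]
```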
