A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
Network policies are implemented by the network plugin, so you must be using a networking solution which supports NetworkPolicy - simply creating the resource without a controller to implement it will have no effect.
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies’ ingress/egress rules. Thus, order of evaluation does not affect the policy result.
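For instance, two hypothetical policies like the following sketch (made-up names; the role=db and role=frontend labels are reused from the example later on this page, role=monitoring is invented for illustration) could both select role=db pods. Those pods then accept ingress from peers allowed by either policy, since the result is the union of the two rules.

# A minimal sketch of two additive policies (hypothetical names and labels).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: monitoring

A role=db pod selected by both policies accepts ingress from role=frontend pods and from role=monitoring pods; neither policy overrides the other.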
The NetworkPolicy Resource
See the NetworkPolicy API reference for a full definition of the resource.
An example NetworkPolicy might look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
POSTing this to the API server will have no effect unless your chosen networking solution supports network policy.
Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure Containers Using a ConfigMap, and Object Management.
spec: NetworkPolicy spec has all the information needed to define a particular network policy in the given namespace.
podSelector: Each NetworkPolicy includes a podSelector which selects the grouping of pods to which the policy applies. The example policy selects pods with the label “role=db”. An empty podSelector selects all pods in the namespace.
policyTypes: Each NetworkPolicy includes a policyTypes list which may include either Ingress, Egress, or both. The policyTypes field indicates whether the given policy applies to ingress traffic to the selected pods, egress traffic from the selected pods, or both. If no policyTypes are specified on a NetworkPolicy, then by default Ingress will always be set, and Egress will be set if the NetworkPolicy has any egress rules.
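As an illustration of that defaulting behavior, a policy like the following sketch (hypothetical name; labels reused from the example above) omits policyTypes but contains an egress rule, so policyTypes defaults to both Ingress and Egress. The selected pods are therefore isolated for ingress as well, with no ingress allowed by this policy.

# A minimal sketch (hypothetical name). Because policyTypes is omitted, it
# defaults to [Ingress, Egress]: Ingress is always included, and Egress is
# included because the policy has egress rules. This policy allows no ingress
# at all to the selected pods, in addition to restricting their egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-only-example
spec:
  podSelector:
    matchLabels:
      role: db
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24

Listing policyTypes: ["Egress"] explicitly would limit the policy to egress only and leave ingress isolation unchanged.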
ingress: Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
egress: Each NetworkPolicy may include a list of whitelist egress rules. Each rule allows traffic which matches both the to and ports sections. The example policy contains a single rule, which matches traffic on a single port to any destination in 10.0.0.0/24.
So, the example NetworkPolicy:
(Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from: any pod in the “default” namespace with the label “role=frontend”, any pod in a namespace with the label “project=myproject”, or IP addresses in 172.17.0.0/16 other than those in 172.17.1.0/24
(Egress rules) allows connections from any pod in the “default” namespace with the label “role=db” to CIDR 10.0.0.0/24 on TCP port 5978
See the Declare Network Policy walkthrough for further examples.
Behavior of to and from selectors
There are four kinds of selectors that can be specified in an ingress from section or egress to section:
podSelector: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.
namespaceSelector: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.
namespaceSelector and podSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy:
...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
      podSelector:
        matchLabels:
          role: client
...
contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But this policy:
...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
    - podSelector:
        matchLabels:
          role: client
...
contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.
When in doubt, use kubectl describe to see how Kubernetes has interpreted the policy.
ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether this happens before or after NetworkPolicy processing, and the behavior may be different for different combinations of network plugin, cloud provider, Service implementation, etc.
In the case of ingress, this means that in some cases you may be able to filter incoming packets based on the actual original source IP, while in other cases, the “source IP” that the NetworkPolicy acts on may be the IP of a LoadBalancer or of the Pod’s node, etc.
For egress, this means that connections from pods to Service IPs that get rewritten to cluster-external IPs may or may not be subject to ipBlock-based policies.
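With that caveat in mind, an ipBlock-based egress rule might look like the following sketch (hypothetical name; labels reused from the example above; the CIDRs are documentation ranges chosen purely for illustration):

# A minimal sketch of an ipBlock-based egress rule (hypothetical name and
# illustrative CIDRs). Selected pods may open connections only to
# 203.0.113.0/24, excluding 203.0.113.128/25.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-external-range
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
        except:
        - 203.0.113.128/25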
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior in that namespace.
You can create a “default” isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
This ensures that even pods that aren’t selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior.
If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all traffic in that namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
You can create a “default” egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
This ensures that even pods that aren’t selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the default ingress isolation behavior.
If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all egress traffic in that namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
You can create a “default” policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
This ensures that even pods that aren’t selected by any other NetworkPolicy will not be allowed ingress or egress traffic.
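Because policies are additive, such a deny-all policy is typically paired with more specific allow policies. As a sketch (hypothetical name; labels and port borrowed from the example earlier on this page), the following policy layered on top of the deny-all policy would re-allow ingress to role=db pods from role=frontend pods on TCP port 6379, while everything else stays denied:

# A minimal sketch (hypothetical name; labels and port taken from the example
# policy above). Combined with the deny-all policy, role=db pods allow ingress
# only from role=frontend pods on TCP 6379.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379

Note that the deny-all policy also isolates the role=frontend pods for egress, so a matching egress allow policy would be needed for such connections to actually succeed.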
FEATURE STATE: Kubernetes v1.12 [alpha]
Kubernetes supports SCTP as a protocol value in NetworkPolicy definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the SCTPSupport feature gate on the apiserver, for example, “--feature-gates=SCTPSupport=true,...”. When the feature gate is enabled, users can set the protocol field of a NetworkPolicy to SCTP. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections.
The CNI plugin has to support SCTP as a protocol value in NetworkPolicy.
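Assuming the feature gate is enabled and the CNI plugin supports SCTP, a port entry can then name SCTP, for example (a minimal sketch with a hypothetical name, labels reused from the example above, and an arbitrary port number):

# A minimal sketch (hypothetical name and port). With the SCTPSupport feature
# gate enabled and a CNI plugin that supports SCTP, this allows SCTP traffic
# on port 7777 to role=db pods from role=frontend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-example
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: SCTP
      port: 7777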