
Setting Up Ingress on a Cluster

Set up edge networking on a cluster and learn a bit about GlobalServices and add-ons

Setting the stage

One of the most common tasks when standing up a new cluster is getting edge networking in place. This breaks down into a few main concerns:

  • Ingress - an ingress controller load balances incoming HTTP requests to the microservices on your cluster
  • DNS registration - external-dns watches ingress resources for new hostnames and registers them with standard DNS services like Route 53
  • SSL cert management - the standard Kubernetes approach here is cert-manager
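
To see how these pieces fit together, here's a minimal sketch of an Ingress resource that exercises all three once they're installed. The hostname, cluster-issuer name, and backend service are placeholders for illustration:

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod # cert-manager issues the TLS cert stored in the secret below
spec:
  ingressClassName: nginx # served by the ingress controller
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls
  rules:
  - host: my-app.example.com # external-dns picks up this host and registers it in your DNS zone
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80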

plural up gets you 90% of the way there out of the box; you'll just need to configure a few basic things. We provide a consolidated runtime chart that makes installing these in one fell swoop much easier, but you can also mix and match from the CNCF ecosystem based on your organization's requirements and preferences.

The tooling you'll use here also generalizes to any other common runtime add-ons you might need to apply. These are all best managed via global services and, if the templating is done well, require only a very small set of files to maintain. Common examples include (a sketch of one such global service follows the list):

  • setting up datadog-agent in all your clusters
  • setting up istio/linkerd service meshes in your clusters
  • setting up security tooling like trivy or kubescape in your clusters
  • setting up cost management tooling like kubecost
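
As a concrete illustration, a global service for the datadog-agent case above might look like the following sketch. The chart name and repository URL are Datadog's public helm defaults, and the namespace and values file are placeholders you'd adapt to your repo:

yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: datadog-agent
  namespace: infra
spec:
  tags:
    role: workload
  template:
    name: datadog-agent
    namespace: datadog # placeholder namespace
    git:
      ref: main
      folder: helm
    repositoryRef:
      name: infra # points to your `plural up` repo, as below
      namespace: infra
    helm:
      version: x.x.x
      chart: datadog # Datadog's public chart name; verify against their helm repo
      url: https://helm.datadoghq.com
      valuesFiles:
      - datadog.yaml.liquid # hypothetical values file next to runtime.yaml.liquid below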

Setting Up The Runtime Chart

We're going to use our runtime chart for now, but the technique can generalize to any other helm chart as well, so if you want to mix and match, feel free to simply use this as inspiration.

First, let's create a global service for the runtime chart. This will ensure it's installed on all clusters with a common tag set. Write this to bootstrap/components/runtime.yaml:

Info:

The global services will all be written to a subfolder of bootstrap. This is because plural up initializes a bootstrap service-of-services under that folder, so we can guarantee any file written there will be synced. Sets of configuration that should be deployed independently and not to the mgmt cluster ought to live in their own folder structure, which we typically put under services/**.

Changes will not be applied until they are pushed or merged to the main branch that the root apps service is listening to.

yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: plrl-runtime
  namespace: infra
spec:
  tags:
    role: workload
  template:
    name: runtime
    namespace: plural-runtime # note this for later
    git:
      ref: main
      folder: helm
    repositoryRef:
      name: infra # this should point to your `plural up` repo
      namespace: infra
    helm:
      version: x.x.x
      chart: runtime
      url: https://pluralsh.github.io/bootstrap
      valuesFiles:
      - runtime.yaml.liquid
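
Putting it together, the files referenced in this guide live in your plural up repo roughly as follows (only the paths discussed here are shown; your repo will contain more):

bootstrap/
  components/
    runtime.yaml          # the GlobalService above
helm/
  runtime.yaml.liquid     # values file the GlobalService points at
services/                 # independently deployed service configuration
terraform/
  modules/
    clusters/
      aws/
        plural.tf         # sets cluster metadata like dns_zone and the external-dns IAM role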

Notice this is expecting a helm/runtime.yaml.liquid file. This would look something like:

yaml
plural-certmanager-webhook:
  enabled: false

ownerEmail: <your-email>

external-dns:
  enabled: true

  logLevel: debug

  provider: aws

  txtOwnerId: plrl-{{ cluster.handle }} # templating in the cluster handle, which is unique, to be the externaldns owner id

  policy: sync
  
  domainFilters:
  - {{ cluster.metadata.dns_zone }} # check terraform/modules/clusters/aws/plural.tf for where this is set

  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.external_dns }} # check terraform/modules/clusters/aws/plural.tf for where this is set
  
{% if cluster.distro == "EKS" %}
# on EKS the load balancer typically forwards traffic via proxy protocol, so nginx needs these settings to recover real client IPs
ingress-nginx:
  config:
    compute-full-forwarded-for: 'true'
    use-forwarded-headers: 'true'
    use-proxy-protocol: 'true'
ingress-nginx-private:
  config:
    compute-full-forwarded-for: 'true'
    use-forwarded-headers: 'true'
    use-proxy-protocol: 'true'
{% endif %}
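
To make the templating concrete: for a hypothetical EKS cluster with handle prod-1, a dns_zone of example.com, and an external-dns IAM role exported by the terraform module, the templated sections above would render to roughly the following (the account ID and role name are placeholders):

yaml
external-dns:
  txtOwnerId: plrl-prod-1
  domainFilters:
  - example.com
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/prod-1-external-dns # placeholder ARN

ingress-nginx:
  config:
    use-proxy-protocol: 'true' # included, along with the other forwarded-header settings, because cluster.distro == "EKS"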