Deployment

HCC Kubernetes (K8s)

HCC K8s Namespace

Client Domain will use the existing capapi-dev HCC K8s namespace for deployments to the HCC K8s development environment.

Before you can access and deploy to that namespace, several tools need to be installed, as outlined in the HCP HCC Kubernetes Getting Started docs.

A kubeconfig file also needs to be created. Please email Chee Wan Woo for a script (setup-kubeconfig.sh) that can be used to create the kubeconfig for access to the various environments (dev, test, stage, and prod).

Once the script has been saved, execute the following commands in a shell:

hcpctl context create --uri https://prm.optum.com prm-prod

hcpctl context use prm-prod

hcpctl login -m username

# for access to the dev namespace

./setup-kubeconfig.sh capapi-dev/naas-v1/ctcnonprdusr001-capapi-dev

kubectl config use-context ctcnonprdusr001_capapi-dev_capapi-dev-hcc-naas-admin

# for access to the test namespace

./setup-kubeconfig.sh capapi-tst/naas-v1/ctcnonprdusr001-capapi-tst

kubectl config use-context ctcnonprdusr001_capapi-tst_capapi-tst-hcc-naas-admin

# for access to the stage namespace

./setup-kubeconfig.sh capapi-dev/naas-v1/ctcnonprdusr001-capapi-stg-ctc
./setup-kubeconfig.sh capapi-dev/naas-v1/elrnonprdusr001-capapi-stg-elr

kubectl config use-context ctcnonprdusr001_capapi-stg-ctc_capapi-stg-ctc-hcc-naas-admin
kubectl config use-context elrnonprdusr001_capapi-stg-elr_capapi-stg-elr-hcc-naas-admin

# for access to the prod namespace

./setup-kubeconfig.sh uhgrg-17d5cc52-90f6-4e4c-8e9f-907bf52e0c45-ns/naas-v1/faro-elr-prd
./setup-kubeconfig.sh uhgrg-17d5cc52-90f6-4e4c-8e9f-907bf52e0c45-ns/naas-v1/faro-ctc-prd

kubectl config use-context elrprdusr001_capapi-prd-elr_capapi-prd-elr-hcc-naas-admin
kubectl config use-context ctcprdusr001_capapi-prd-ctc_capapi-prd-ctc-hcc-naas-admin

Verify that you can access the cluster by running a kubectl command, e.g. kubectl get pods.
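
For example, against the dev namespace configured above:

kubectl get pods -n capapi-dev

A list of pods (or "No resources found" in an empty namespace) confirms the kubeconfig works; an authentication or authorization error means the setup steps above need to be revisited.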

JFrog Artifactory Namespace

In addition to an HCC K8s namespace, a JFrog Artifactory namespace needs to be created to house the Docker images that will be referenced from within the HCC K8s Deployment manifests. The HCP Console can be used to create an Artifactory namespace.
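
Images are pushed to that Artifactory namespace with the standard Docker commands. A minimal sketch, assuming a locally built client-domain:latest image and the docker.repo1.uhc.com registry path used in the Deployment manifest below:

docker login docker.repo1.uhc.com
docker tag client-domain:latest docker.repo1.uhc.com/client-domain/client-domain:latest
docker push docker.repo1.uhc.com/client-domain/client-domain:latest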

K8s Manifest Files

Once the namespaces have been created and the tools installed, K8s manifest files can be written and applied to create the required Kubernetes objects with the usual kubectl apply -f <manifest.yaml> command.
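
For example, to apply the Deployment below and watch the rollout (the deployment.yaml file name is illustrative):

kubectl apply -f deployment.yaml -n capapi-dev
kubectl rollout status deployment/prism-client-domain -n capapi-dev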

There are a few fields that the deployment manifest files must contain at the container level. These include the following:

  • resources
  • livenessProbe
  • readinessProbe
  • securityContext

Here's an example Deployment manifest that sets those fields:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prism-client-domain
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: client-domain-service-dev
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: client-domain-service-dev
    spec:
      containers:
      - env:
        - name: JAVA_OPTS_APPEND
          value: -Dspring.profiles.active=dev -Duser.timezone=America/Chicago -Djasypt.encryptor.password=GetRidOfStuff
            -Djavax.net.ssl.trustStore=/microservice/cacerts -Djavax.net.ssl.trustStorePassword=changeit
            -Xms128m -Xmx768m -XX:HeapDumpPath=/log -XX:+HeapDumpOnOutOfMemoryError
            -Djava.net.preferIPv4Stack=true -Dserver.port=8086 -Ddb_credential=faroapp
        - name: SPRING_PROFILES_ACTIVE
          value: dev
        - name: SPLUNK
          value: "NO"
        - name: WITH_DYNATRACE
          value: "0"
        - name: CONTAINER_HEAP_PERCENT
          value: "0.75"
        - name: JAVA_MAX_MEM_RATIO
          value: "75"
        - name: FARO_USERNAME
          value: faroapp
        - name: FARO_PASSWORD
          valueFrom:
            secretKeyRef:
              key: faroapp
              name: faroapp
        image: docker.repo1.uhc.com/client-domain/client-domain:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health/liveness
            port: 8086
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
        name: client-domain-service-dev
        ports:
        - containerPort: 8086
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health/readiness
            port: 8086
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          limits:
            cpu: 500m
            memory: 1000Mi
          requests:
            cpu: 60m
            memory: 128Mi
        securityContext:
          capabilities:
            drop:
            - KILL
            - MKNOD
            - SYS_CHROOT
          runAsUser: 1000
        terminationMessagePath: /log/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /log
          name: client-domain-service-dev-pv
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: client-domain-service-dev-pv
        persistentVolumeClaim:
          claimName: client-domain-service
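
Note that the nginx proxy config shown later on this page forwards traffic to a client-domain-service Service, which isn't part of the Deployment manifest above. A minimal sketch of creating that Service from the Deployment's selector, assuming it should listen on the default port 80 and target the container's 8086 port:

kubectl expose deployment prism-client-domain --name=client-domain-service --port=80 --target-port=8086 -n capapi-dev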

Ingress

There's an existing Ingress object in the capapi-dev namespace that routes external traffic arriving through the dev-api-faro.optum.com F5 Load Balancer (see here for the Load Balancer details). As a side note, the SSL certificate is defined on the Load Balancer.

The Ingress object looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 18m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "360"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "360"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  labels:
    app: dev-core-web
  name: dev-core-web-ingress
  namespace: capapi-dev
spec:
  ingressClassName: nginx
  rules:
  - host: dev-api-faro.optum.com
    http:
      paths:
      - backend:
          service:
            name: dev-core-web
            port:
              number: 2088
        path: /
        pathType: Prefix
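
To confirm the Ingress object exists and is routing as expected, it can be inspected directly:

kubectl describe ingress dev-core-web-ingress -n capapi-dev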

Proxy Creation

Application proxies are created using a ConfigMap (nginx-dev-conf in the capapi-dev namespace); the config can be found in GitHub here.

The proxy determines which application traffic from the Ingress is forwarded to, based on the location directive. It also defines the HTTP headers applied to each request.
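
Changes to the proxy are made by updating that ConfigMap and restarting the proxy pods so nginx picks up the new config. A minimal sketch, assuming the config is kept in a local nginx-dev.conf file and that the proxy runs as a dev-core-web Deployment matching the Service referenced by the Ingress above:

kubectl create configmap nginx-dev-conf --from-file=nginx-dev.conf -n capapi-dev --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deployment/dev-core-web -n capapi-dev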

The Client Domain proxy config looks like this:

#client-domain-service-dev
#https://dev-api-faro.optum.com/client/373577
location /client
{
    satisfy any;
    if (-f /opt/app-root/cache/offline.html)
    {
        return 503;
    }

    # Security headers.
    add_header 'X-Frame-Options' 'SAMEORIGIN' always;
    add_header 'X-Content-Type-Options' 'nosniff' always;
    add_header 'X-XSS-Protection' '1; mode=block' always;

    # Opt in to the future.
    add_header 'X-UA-Compatible' 'IE=Edge' always;

    # Always use transport security (HSTS).
    add_header 'Strict-Transport-Security' 'max-age=31536000' always;

    # Content Security Policy
    # http://www.w3.org/TR/CSP/
    add_header 'Content-Security-Policy' "default-src 'self' https:; font-src https: data:; img-src blob: https: data:; style-src 'self' https: 'unsafe-inline'; script-src 'self' https: 'unsafe-inline' 'unsafe-eval'; upgrade-insecure-requests; block-all-mixed-content; reflected-xss block; referrer no-referrer-when-downgrade" always;

    # Don't cache anything we're serving from Mule (mostly HTML).
    add_header 'Cache-Control' 'no-cache, no-store, must-revalidate, private' always;

    # Finally, force clients to revalidate (legacy HTTP/1.0 header).
    add_header 'Pragma' 'no-cache' always;

    # Talk to faro via HTTP/1.1.
    proxy_http_version 1.1;
    proxy_set_header 'Connection' '';

    # Proxy headers and don't cache responses.
    proxy_set_header 'Host' $proxy_host;
    #proxy_cache off;
    proxy_pass http://client-domain-service;
}