<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.1.1">Jekyll</generator><link href="https://inlets.dev/feed.xml" rel="self" type="application/atom+xml" /><link href="https://inlets.dev/" rel="alternate" type="text/html" /><updated>2026-04-11T11:13:53+00:00</updated><id>https://inlets.dev/feed.xml</id><title type="html">inlets Pro</title><subtitle>The Cloud Native Tunnel</subtitle><entry><title type="html">Managed HTTPS tunnels in one-click with inlets cloud</title><link href="https://inlets.dev/blog/tutorial/2025/04/01/one-click-hosted-http-tunnels.html" rel="alternate" type="text/html" title="Managed HTTPS tunnels in one-click with inlets cloud" /><published>2025-04-01T00:00:00+00:00</published><updated>2025-04-01T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2025/04/01/one-click-hosted-http-tunnels</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2025/04/01/one-click-hosted-http-tunnels.html">&lt;p&gt;Imagine if you could expose a local HTTP service, without TLS enabled, to the public Internet with an HTTPS certificate, with just one click.&lt;/p&gt;

&lt;p&gt;This is now possible with inlets cloud, our hosted tunnel service, which is live in Europe, US East, and Asia, and free to use for all inlets subscribers whilst in beta.&lt;/p&gt;

&lt;p&gt;We’ll start off by looking at the one-click, automatic option, then look at how we can use our own custom domain, or even a custom reverse proxy like Caddy, Nginx, or Traefik. I’ll also throw in some bonus material on how to expose SSH, the Kubernetes API, and an advanced option for self-hosting your own tunnel server.&lt;/p&gt;

&lt;p&gt;For help and support, you can join our Discord server from the link in the inlets cloud dashboard, or use the &lt;a href=&quot;https://inlets.dev/contact&quot;&gt;contact page&lt;/a&gt; to get in touch.&lt;/p&gt;

&lt;h2 id=&quot;three-options-for-your-tunnels&quot;&gt;Three options for your tunnels&lt;/h2&gt;

&lt;p&gt;We’ll focus on HTTP traffic for this post - think of a draft blog post, an API you’re working on, a webhook receiver, or something in your homelab like Grafana, WordPress, or perhaps an S3 endpoint like MinIO that you can use to perform backups over the Internet to your NAS.&lt;/p&gt;

&lt;p&gt;Let’s look at each of the three options, ranked from the one-click experience (easiest) all the way down to running your own Nginx server, Caddy server, or Kubernetes Ingress controller (most flexible).&lt;/p&gt;

&lt;h3 id=&quot;1-one-click-http-to-https---with-our-try-inletsdev-domain&quot;&gt;1. One-click HTTP to HTTPS - with our try-inlets.dev domain&lt;/h3&gt;

&lt;p&gt;You have an HTTP endpoint on your machine with no TLS enabled. You can now expose it to the public Internet with a single click using HTTPS, under our domain &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;try-inlets.dev&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a tunnel, giving it a descriptive name such as “WordPress”, “Next”, or “Grafana”.&lt;/p&gt;

&lt;p&gt;Click the “HTTP endpoint (we will terminate TLS for you)” option.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/one-click-tunnel.png&quot; alt=&quot;Create a one-click tunnel&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Then make sure the “Generate domain” option is toggled on. This will generate a fun, random domain name for you like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;prickly-hedgehog.try-inlets.dev&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;happy-platypus.try-inlets.dev&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create the tunnel, then scroll down to “Connect” and pick from CLI, systemd, or Kubernetes YAML.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/one-click-copy.png&quot; alt=&quot;Connect to the tunnel&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Click the Copy icon, then paste the CLI command into a terminal on your local machine.&lt;/p&gt;

&lt;p&gt;Change the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--upstream&lt;/code&gt; flag to the HTTP endpoint on your local machine, or on a machine reachable on your local network.&lt;/p&gt;

&lt;p&gt;For Grafana, that is likely going to be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://127.0.0.1:3000&lt;/code&gt;, but if it were running on your Raspberry Pi, it could be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://192.168.0.12:3000&lt;/code&gt;.&lt;/p&gt;
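&lt;p&gt;Putting that together for Grafana, the full client command might look like the sketch below. The server URL and token path are placeholders, so copy the real values from the “Connect” section of your own tunnel:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;inlets-pro http client \
  --url &quot;wss://YOUR-TUNNEL-SERVER&quot; \
  --token-file &quot;./token&quot; \
  --upstream http://127.0.0.1:3000
&lt;/code&gt;&lt;/pre&gt;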

&lt;p&gt;You’ll then be able to access your service at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://prickly-hedgehog.try-inlets.dev&lt;/code&gt; or whatever name you chose.&lt;/p&gt;

&lt;p&gt;I recorded a quick video walk-through to show you just how quick and easy this approach can be:&lt;/p&gt;

&lt;div style=&quot;margin: 0 auto;&quot;&gt;
    
    &lt;div class=&quot;ytcontainer&quot;&gt;
        &lt;iframe width=&quot;560&quot; height=&quot;315&quot; class=&quot;yt&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; src=&quot;https://www.youtube.com/embed/oZ_Pph-Go2U&quot;&gt;&lt;/iframe&gt;
    &lt;/div&gt;
&lt;/div&gt;

&lt;h3 id=&quot;2-http-to-https-with-your-own-custom-domain&quot;&gt;2. HTTP to HTTPS with your own custom domain&lt;/h3&gt;

&lt;p&gt;First of all, create a new domain and verify it by creating a TXT record in your DNS provider. If you don’t have a domain yet, we’d recommend trying out Cloudflare or Namecheap, both of which are inexpensive and easy to set up.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/add-domain.png&quot; alt=&quot;Add a domain&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The UI will show you how to verify your own domain, and confirm that it is working.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/verify-domain.png&quot; alt=&quot;Verify the domain&quot; /&gt;&lt;/p&gt;
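&lt;p&gt;If verification does not pass straight away, you can check that the TXT record has propagated using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dig&lt;/code&gt;. The domain below is a placeholder, so query the exact record name shown in the UI:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;dig +short TXT example.com
&lt;/code&gt;&lt;/pre&gt;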

&lt;p&gt;Next, create a tunnel again, but this time make sure the “Generate domain” toggle is off.&lt;/p&gt;

&lt;p&gt;Enter each of the sub-domains you’d like to use, then, as before, scroll down to “Connect” and pick from CLI, systemd, or Kubernetes YAML.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/two-custom-domains-terminated.png&quot; alt=&quot;Two custom domains - terminated in inlets-cloud&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I’ve added both &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openfaas.selfactuated.dev&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fileshare.selfactuated.dev&lt;/code&gt; as examples.&lt;/p&gt;

&lt;p&gt;If those services were both running on my machine on ports 8080 and 8000 respectively, then I’d change the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--upstream&lt;/code&gt; flags as follows:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt; openfaas.selfactuated.dev&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;http://127.0.0.1:8080 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt; fileshare.selfactuated.dev&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;http://127.0.0.1:8000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once again, you can then run the client on your machine and expose the services to the public Internet.&lt;/p&gt;

&lt;p&gt;Run the CLI command for the client, and you’ll then be able to access your services at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://openfaas.selfactuated.dev&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://fileshare.selfactuated.dev&lt;/code&gt;, or whatever names you chose.&lt;/p&gt;

&lt;h3 id=&quot;3-https-termination---bring-your-own-domain&quot;&gt;3. HTTPS termination - bring your own domain&lt;/h3&gt;

&lt;p&gt;This final option is the most versatile, but is also more involved than the first two.&lt;/p&gt;

&lt;p&gt;Instead of having inlets-cloud terminate TLS and obtain certificates for you, you will run your own reverse proxy or Kubernetes Ingress controller on your machine or cluster.&lt;/p&gt;

&lt;p&gt;You’ll need to create a domain and verify it before moving forward. If you already have one verified, you can use it again for the new sub-domains you want to expose.&lt;/p&gt;

&lt;p&gt;Create a tunnel and enter the sub-domains you want to expose, but this time pick “Ingress (Reverse proxy, Kubernetes Ingress, Istio, SSH)” as the type of tunnel.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/tls-terminated.png&quot; alt=&quot;Two custom domains - terminated on your network&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I’ve added both &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openfaas.selfactuated.dev&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fileshare.selfactuated.dev&lt;/code&gt; as examples.&lt;/p&gt;

&lt;p&gt;Rather than having the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--upstream&lt;/code&gt; flags point directly at the plaintext HTTP service, we point &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--upstream&lt;/code&gt; at our reverse proxy or Ingress controller.&lt;/p&gt;

&lt;p&gt;If you were exposing Caddy, for instance, you would need to create a Caddyfile so that it knows how to answer the ACME challenges from Let’s Encrypt, and how to proxy the traffic to your local services.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-caddy&quot;&gt;openfaas.selfactuated.dev {
  reverse_proxy localhost:8080
}

fileshare.selfactuated.dev {
  reverse_proxy localhost:8000
}
&lt;/code&gt;&lt;/pre&gt;
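&lt;p&gt;With that Caddyfile saved in the current directory, Caddy can then be started in the foreground with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;caddy run
&lt;/code&gt;&lt;/pre&gt;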

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/reverse-proxy.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For Kubernetes, the process is very similar, but you use a Kubernetes Ingress resource for each of the sub-domains you want to expose, and have the tunnel point to the Ingress controller.&lt;/p&gt;
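&lt;p&gt;As a rough sketch, an Ingress resource for one of the example sub-domains could look like the following. The service name, port, ingress class, and TLS secret are illustrative, and a tool such as cert-manager would typically obtain the certificate by answering the ACME challenge:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openfaas
spec:
  ingressClassName: nginx
  rules:
  - host: openfaas.selfactuated.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gateway
            port:
              number: 8080
  tls:
  - hosts:
    - openfaas.selfactuated.dev
    secretName: openfaas-cert
&lt;/code&gt;&lt;/pre&gt;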

&lt;p&gt;&lt;img src=&quot;/images/2025-04-one-click-tunnels/custom-k8s.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;In this post we looked at three options for exposing HTTP services to the public Internet, starting with a single click. We used inlets cloud, a managed service that’s free to all inlets subscribers during its beta.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;We started off with the one-click option, which is the easiest and requires the least configuration. That is instant, and gives you an HTTPS endpoint on our &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;try-inlets.dev&lt;/code&gt; domain.&lt;/li&gt;
  &lt;li&gt;The second option was to use your own custom domain, but still have inlets cloud terminate TLS for you. Just verify a domain and you’re good to go.&lt;/li&gt;
  &lt;li&gt;The final option is the most flexible, and allows you to bring your own domain and run your own reverse proxy or Ingress controller.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tunnel client can be run directly on your machine with a CLI command, set up as a systemd service, or deployed to a Kubernetes cluster using a YAML file copied from the “Connect” section of the tunnel details.&lt;/p&gt;

&lt;p&gt;You can &lt;a href=&quot;https://cloud.inlets.dev/register&quot;&gt;register for access to inlets cloud&lt;/a&gt;. Just make sure you use the same email from your inlets subscription, and we’ll get you approved for access quickly.&lt;/p&gt;

&lt;p&gt;If you have any questions don’t hesitate to &lt;a href=&quot;https://inlets.dev/contact&quot;&gt;reach out&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;inlets-cloud-can-also-expose-ssh-and-the-kubernetes-api-server&quot;&gt;Inlets Cloud can also expose SSH and the Kubernetes API server&lt;/h3&gt;

&lt;p&gt;Inlets Cloud can also be used along with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro snimux&lt;/code&gt; command &lt;a href=&quot;https://inlets.dev/blog/tutorial/2024/10/17/ssh-with-inlets-cloud.html&quot;&gt;to expose SSH&lt;/a&gt; on as many local servers and Raspberry Pis as you like.&lt;/p&gt;

&lt;p&gt;If you have a K3s cluster at home, or in your lab, you can &lt;a href=&quot;https://inlets.dev/blog/2024/02/09/the-homelab-tunnel-you-need.html&quot;&gt;tunnel out the Kubernetes API server&lt;/a&gt; so you can run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; from literally anywhere with an Internet connection.&lt;/p&gt;

&lt;h3 id=&quot;did-you-know-you-can-also-self-host-tunnel-servers&quot;&gt;Did you know? You can also self-host tunnel servers&lt;/h3&gt;

&lt;p&gt;Inlets Cloud is a very convenient way to set up tunnel servers instantly, with as little as one click, but for maximum flexibility and control, you can also self-host the tunnel server.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/tutorial/manual-http-server/&quot;&gt;Set up a manual HTTPS tunnel server&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/tutorial/automated-http-server/&quot;&gt;Automate a HTTPS tunnel server with inletsctl&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/tutorial/kubernetes-tcp-loadbalancer/&quot;&gt;Automate Kubernetes Load Balancers with inlets-operator&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Alex Ellis</name></author><category term="tutorial" /><summary type="html">Imagine if you could expose a local HTTP service, without TLS enabled to the public Internet with a HTTPS certificate with just one click.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2025-04-one-click-tunnels/background.png" /><media:content medium="image" url="https://inlets.dev/images/2025-04-one-click-tunnels/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How to authenticate your HTTP tunnels with inlets and OAuth.</title><link href="https://inlets.dev/blog/tutorial/2025/03/10/secure-http-tunnels-with-oauth.html" rel="alternate" type="text/html" title="How to authenticate your HTTP tunnels with inlets and OAuth." /><published>2025-03-10T00:00:00+00:00</published><updated>2025-03-10T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2025/03/10/secure-http-tunnels-with-oauth</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2025/03/10/secure-http-tunnels-with-oauth.html">&lt;p&gt;In this tutorial you will learn how to secure your tunnelled HTTP services using the Inlets built-in HTTP authentication.&lt;/p&gt;

&lt;p&gt;While inlets allows you to quickly expose any HTTP application to the public internet, you may not want everyone to be able to access it. Inlets can quickly add authentication to your application without any changes.&lt;/p&gt;

&lt;p&gt;At the time of writing, Inlets supports three forms of authentication:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;OAuth&lt;/li&gt;
  &lt;li&gt;Basic authentication&lt;/li&gt;
  &lt;li&gt;Bearer token authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will be showing you how to configure each of these authentication methods.&lt;/p&gt;

&lt;h2 id=&quot;when-would-you-want-to-secure-your-tunnel&quot;&gt;When would you want to secure your tunnel?&lt;/h2&gt;

&lt;p&gt;If you’re exposing an application for production, you may want to build authentication directly into your application.&lt;/p&gt;

&lt;p&gt;However, during development and whilst collaborating with others, you may want to restrict access to a limited audience.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;You’re working on a blog post draft, but only want your team mates to view it due to an embargo or because it contains confidential information.&lt;/li&gt;
  &lt;li&gt;You’re iterating on an API, and a reverse proxy usually provides authentication. It doesn’t make sense to run that locally, but you still need to restrict access.&lt;/li&gt;
  &lt;li&gt;When someone’s helping you with remote support. You want to expose your router’s admin interface, but need to restrict access to certain people.&lt;/li&gt;
  &lt;li&gt;You’re running internal development tools like Grafana dashboards or staging environments that shouldn’t be publicly accessible.&lt;/li&gt;
  &lt;li&gt;You need to demo work to clients or stakeholders, but want to ensure only they can access it.&lt;/li&gt;
  &lt;li&gt;You’re exposing temporary debugging endpoints or admin interfaces that need to be secured.&lt;/li&gt;
  &lt;li&gt;You’re sharing access to local development environments, such as an IDE, with remote team members but need to maintain security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to all of the above, you can also restrict access by IP address using an &lt;a href=&quot;https://docs.inlets.dev/tutorial/ip-filtering/&quot;&gt;IP allow list&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;We assume you have an Inlets HTTP tunnel server deployed. If you don’t have a tunnel yet follow our docs to &lt;a href=&quot;https://docs.inlets.dev/tutorial/automated-http-server/&quot;&gt;create a new HTTP tunnel server&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the tunnel client make sure you have the &lt;a href=&quot;https://github.com/inlets/inlets-pro/releases&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro&lt;/code&gt; binary&lt;/a&gt;, version 0.10.0 or higher installed. Earlier versions do not support authentication.&lt;/p&gt;

&lt;h2 id=&quot;connect-the-tunnel-client&quot;&gt;Connect the tunnel client&lt;/h2&gt;

&lt;p&gt;In this example we will be exposing &lt;a href=&quot;https://prometheus.io/&quot;&gt;Prometheus&lt;/a&gt;, which is a popular open source tool for monitoring and alerting. We chose it because it has a web interface, and an API exposed over the same port. It does have some of its own built-in options for authentication, but when we use it with inlets, we can bypass authentication for our own local use, and only enforce it for remote users.&lt;/p&gt;

&lt;p&gt;Expose the Prometheus upstream without any authentication enabled:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro http client &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--url&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;wss://157.180.37.179:8123&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--token-file&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;./token&quot;&lt;/span&gt;  &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt; prometheus.demo.welteki.dev&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;http://127.0.0.1:9090
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Authentication for tunnels is configured through flags when connecting to the tunnel server. In the next paragraphs we will be going through the configuration for different authentication methods. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--url&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--token-file&lt;/code&gt; flags will be left out of the commands for brevity but should be provided when connecting to your own tunnel.&lt;/p&gt;

&lt;h2 id=&quot;oauth-with-github&quot;&gt;OAuth with GitHub&lt;/h2&gt;

&lt;p&gt;If you want to avoid managing and distributing credentials for your application, or need fine-grained control over who can access the app, you can use OAuth to protect tunneled applications.&lt;/p&gt;

&lt;p&gt;In this tutorial we will be setting up OAuth with GitHub so that users can login with their GitHub account to access the tunnel.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Follow the &lt;a href=&quot;https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/creating-an-oauth-app&quot;&gt;GitHub documentation&lt;/a&gt; to create a new OAuth app for your tunnel.&lt;/li&gt;
  &lt;li&gt;Set the Authorization callback URL. In this example we are using the domain &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://prometheus.demo.welteki.dev&lt;/code&gt; to expose the tunnel. The authorization callback for the tunnel will be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://prometheus.demo.welteki.dev/_/oauth/callback&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-03-authenticate-http-tunnels/github-oauth-app.png&quot; alt=&quot;GitHub OAuth app configuration&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Example GitHub OAuth app configuration&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you complete the registration of your OAuth app, you will get a client ID and secret. Save these in a convenient location. Both values need to be provided through the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--oauth-client-id&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--oauth-client-secret&lt;/code&gt; flags to start the tunnel client with OAuth enabled.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;inlets-pro http client \
&lt;/span&gt;  --upstream prometheus.demo.welteki.dev=http://127.0.0.1:9090 \
&lt;span class=&quot;gi&quot;&gt;+ --oauth-provider github \
+ --oauth-client-id $(cat ./oauth-client-id) \
+ --oauth-client-secret $(cat ./oauth-client-secret) \
+ --oauth-acl welteki
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--oauth-acl&lt;/code&gt; flag is used to provide a list of users that are allowed to access the application. In the case of the GitHub provider, the ACL value can be either a GitHub username or an email address.&lt;/p&gt;

&lt;p&gt;When trying to access the URL of the tunneled service, users will be asked to log in with the configured provider, in this case GitHub, before they are able to access the application.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-03-authenticate-http-tunnels/github-oauth-login.png&quot; alt=&quot;OAuth login page for authenticated tunnels&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Login page for tunnels with GitHub OAuth enabled.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;basic-authentication&quot;&gt;Basic authentication&lt;/h2&gt;

&lt;p&gt;The simplest form of authentication supported by Inlets is basic authentication. Enabling basic authentication on the tunnel will protect the HTTP service with a username and password.&lt;/p&gt;

&lt;p&gt;When a user visits the URL of the tunneled service they will be prompted for a username and password before they are able to access the application.&lt;/p&gt;

&lt;p&gt;Basic auth can be enabled for a tunnel by setting the basic auth flags when connecting the tunnel client.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;inlets-pro http client \
&lt;/span&gt;  --upstream prometheus.demo.welteki.dev=http://127.0.0.1:9090 \
&lt;span class=&quot;gi&quot;&gt;+ --basic-auth-username welteki \
+ --basic-auth-password $(cat ./basic-auth-password)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--basic-auth-username&lt;/code&gt; flag is optional; when it is not provided, the username will default to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin&lt;/code&gt;.&lt;/p&gt;
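&lt;p&gt;A quick way to test basic auth from another machine is with curl’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-u&lt;/code&gt; flag; the username, password file, and domain below are the examples used earlier in this post:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;curl -u welteki:$(cat ./basic-auth-password) \
  https://prometheus.demo.welteki.dev/
&lt;/code&gt;&lt;/pre&gt;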

&lt;p&gt;For OAuth, multiple ACL entries can be provided by repeating the flag. For example, the following allows access to full.name@example.com along with the logins login1 and login2.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nt&quot;&gt;--oauth-acl&lt;/span&gt; full.name@example.com &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;--oauth-acl&lt;/span&gt; login1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;--oauth-acl&lt;/span&gt; login2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-03-authenticate-http-tunnels/basic-auth.png&quot; alt=&quot;Tunnel endpoint protected with basic auth&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Basic auth login for a tunnel endpoint&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;token-authentication&quot;&gt;Token authentication&lt;/h2&gt;

&lt;p&gt;The OAuth flow requires a web browser and human interaction to authenticate. If you are tunneling a service such as an HTTP API that needs to be accessed by a headless client (e.g. a script, mobile app, or backend service), where it is not possible to complete the OAuth flow, you can use Bearer Token Authentication.&lt;/p&gt;

&lt;p&gt;In the case of our Prometheus server, we have seen how the UI can be protected with basic auth or OAuth, but Prometheus also exposes an HTTP API that needs to be protected while remaining accessible to other services.&lt;/p&gt;

&lt;p&gt;Generate a random token and store it in a file. We will use openssl to generate the token:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;openssl rand &lt;span class=&quot;nt&quot;&gt;-base64&lt;/span&gt; 16 &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ./bearer-token
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Start the Inlets client with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--bearer-token&lt;/code&gt; flag to enable token authentication.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;inlets-pro http client \
&lt;/span&gt;  --upstream prometheus.demo.welteki.dev=http://127.0.0.1:9090 \
&lt;span class=&quot;gi&quot;&gt;+ --bearer-token $(cat ./bearer-token)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Query the Prometheus API with curl and authenticate by adding the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Authorization&lt;/code&gt; header on the request.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;s2&quot;&gt;&quot;https://prometheus.demo.welteki.dev/api/v1/labels&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Authorization: Bearer &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; ./bearer-token&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Bearer Token Authentication can be used together with both basic auth and OAuth. Just add the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--bearer-token&lt;/code&gt; flag along with the flags you would need to configure OAuth or basic authentication. This makes it possible to quickly add authentication to an application like Prometheus, which has both a browser-based UI and an HTTP API.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;inlets-pro http client \
&lt;/span&gt;  --upstream prometheus.demo.welteki.dev=http://127.0.0.1:9090 \
  --oauth-provider github \
  --oauth-client-id $(cat ./oauth-client-id) \
  --oauth-client-secret $(cat ./oauth-client-secret) \
  --oauth-acl welteki \
&lt;span class=&quot;gi&quot;&gt;+ --bearer-token $(cat ./bearer-token)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Inlets tunnels can be used to quickly add different types of authentication to your HTTP services without changing your applications. We showed how to configure and use the different authentication types supported by Inlets and discussed which one to pick for different use cases:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Use OAuth if you need to expose a UI or browser-based application. OAuth gives you fine-grained control over who can access the tunneled service through Access Control Lists, without the need to share credentials.&lt;/li&gt;
  &lt;li&gt;Basic Authentication is the simplest form of authentication for your tunnels. It allows users to log in with a username and password and can be used as an alternative when using OAuth is not an option for you. Basic auth can also be used by headless clients without human interaction.&lt;/li&gt;
  &lt;li&gt;Bearer Token Authentication is recommended if you are exposing an HTTP API that needs to be accessible by headless clients. It can be used as the only authentication option or in combination with both OAuth and basic authentication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inlets has support for multiple OAuth providers like GitHub and Google. As a commercial user you get access to all providers. Please &lt;a href=&quot;https://inlets.dev/contact&quot;&gt;get in touch with the Inlets team&lt;/a&gt; if the OAuth provider you need is missing.&lt;/p&gt;</content><author><name>Han Verstraete</name></author><category term="tutorial" /><summary type="html">In this tutorial you will learn how to secure your tunnelled HTTP services using the Inlets built-in HTTP authentication.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2025-03-authenticate-http-tunnels/background.png" /><media:content medium="image" url="https://inlets.dev/images/2025-03-authenticate-http-tunnels/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Expose ArgoCD on the Internet with Inlets and Istio</title><link href="https://inlets.dev/blog/tutorial/2025/02/04/expose-argocd-with-inlets.html" rel="alternate" type="text/html" title="Expose ArgoCD on the Internet with Inlets and Istio" /><published>2025-02-04T00:00:00+00:00</published><updated>2025-02-04T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2025/02/04/expose-argocd-with-inlets</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2025/02/04/expose-argocd-with-inlets.html">&lt;p&gt;In this tutorial, you will learn how to expose the ArgoCD dashboard on the Internet with &lt;a href=&quot;https://istio.io/&quot;&gt;Istio&lt;/a&gt; and the inlets-operator for Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://argo-cd.readthedocs.io/en/stable/&quot;&gt;ArgoCD&lt;/a&gt; is a popular tool for managing GitOps workflows and deploying applications to Kubernetes. It provides a web-based dashboard that allows you to view the state of your applications, compare them to the desired state, and sync them as needed. Another popular tool for GitOps workflows is &lt;a href=&quot;https://fluxcd.io/&quot;&gt;FluxCD&lt;/a&gt;, which does not ship with a built-in UI, although &lt;a href=&quot;https://fluxcd.io/flux/#flux-uis&quot;&gt;add-ons are available&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are running ArgoCD in a private VPC, in your homelab, or on-premises, then the inlets-operator can be used to quickly create a TCP tunnel to expose Istio’s Ingress Gateway to the Internet. This will allow you to access the ArgoCD dashboard from anywhere in the world.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-02-argocd-istio/argo-welcome.png&quot; alt=&quot;ArgoCD login page exposed via Istio and Inlets&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;ArgoCD login page exposed via Istio and Inlets&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A different but related workflow we have seen with inlets tunnels is one where a number of remote Kubernetes clusters are tunneled back to a central Kubernetes cluster. From there, each cluster can be added to ArgoCD, so that multiple clusters and their applications can be managed from a single dashboard. We covered that previously in &lt;a href=&quot;https://inlets.dev/blog/2022/08/10/managing-tunnel-servers-with-argocd.html&quot;&gt;How To Manage Inlets Tunnels Servers With Argo CD and GitOps&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;You will need a Kubernetes cluster running in a private network without ingress or Load Balancers. &lt;a href=&quot;https://kind.sigs.k8s.io/&quot;&gt;KinD&lt;/a&gt;, &lt;a href=&quot;https://k3s.io&quot;&gt;K3s&lt;/a&gt;, or &lt;a href=&quot;https://minikube.sigs.k8s.io/&quot;&gt;Minikube&lt;/a&gt; can be a convenient way to test these steps.&lt;/p&gt;

&lt;p&gt;We will install a number of Helm charts and CLIs during the tutorial. For convenience, &lt;a href=&quot;https://arkade.dev&quot;&gt;arkade&lt;/a&gt; will be used to install these tools, but you are free to install them in whatever way you prefer.&lt;/p&gt;

&lt;p&gt;You will also need a domain name under your control where you can create an A record to point to the public IP address of the inlets tunnel server.&lt;/p&gt;

&lt;p&gt;Personal and commercial licenses are available from the &lt;a href=&quot;https://inlets.dev/pricing/&quot;&gt;inlets website&lt;/a&gt; at a similar price to a cloud load balancer service. There are no restrictions on the number of domains that can be exposed over a single tunnel, and the tunnel is hosted in your own cloud account.&lt;/p&gt;

&lt;h2 id=&quot;install-the-inlets-operator&quot;&gt;Install the inlets-operator&lt;/h2&gt;

&lt;p&gt;The inlets-operator looks for LoadBalancer services and in response creates a VM in your cloud account with a public IP address. It then creates a Deployment for the inlets client within the cluster, and updates the LoadBalancer’s IP address with the public IP of the inlets server.&lt;/p&gt;

&lt;p&gt;From that point, you have a fully working TCP tunnel to your Kubernetes cluster, just like you’d get with a LoadBalancer service from a cloud provider.&lt;/p&gt;

&lt;p&gt;To install the inlets-operator with &lt;a href=&quot;https://m.do.co/c/2962aa9e56a1&quot;&gt;DigitalOcean&lt;/a&gt;, create an API token with read/write access and save it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/do-access-token&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Create a tunnel in the lon1 region&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DO_REGION&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;lon1
arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;inlets-operator &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--provider&lt;/span&gt; digitalocean &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--region&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$DO_REGION&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--access-token-file&lt;/span&gt; ~/do-access-token
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can find instructions for Helm and other providers like AWS EC2, GCE, Azure, Scaleway, and so forth in the &lt;a href=&quot;https://docs.inlets.dev/reference/inlets-operator/&quot;&gt;inlets-operator documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Along with the documentation, you can find the &lt;a href=&quot;https://github.com/inlets/inlets-operator/tree/master/chart/inlets-operator&quot;&gt;inlets-operator Helm chart&lt;/a&gt; on GitHub.&lt;/p&gt;
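&lt;p&gt;Once the operator is running, you can watch it work. The commands below are a rough sketch, so adjust the namespace to wherever you installed the operator:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# List the Tunnel custom resources the operator creates for each LoadBalancer
kubectl get tunnels -A

# Follow the operator logs whilst it provisions the tunnel server VM
kubectl logs -f deploy/inlets-operator
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;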

&lt;h2 id=&quot;install-argocd&quot;&gt;Install ArgoCD&lt;/h2&gt;

&lt;p&gt;If you haven’t already installed ArgoCD, you can do so with the following command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;argocd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now edit the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;argocd-server&lt;/code&gt; deployment and turn off its built-in self-signed certificate. We will be obtaining a certificate from Let’s Encrypt instead.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl edit deployment argocd-server &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; argocd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Add the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--insecure&lt;/code&gt; flag to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;args&lt;/code&gt; section:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;      containers:
      - args:
        - /usr/local/bin/argocd-server
&lt;span class=&quot;gi&quot;&gt;+       - --insecure
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;install-istio&quot;&gt;Install Istio&lt;/h2&gt;

&lt;p&gt;Install Istio with the following command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;istio
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;create-a-dns-record-for-the-argocd-dashboard&quot;&gt;Create a DNS record for the ArgoCD dashboard&lt;/h2&gt;

&lt;p&gt;Verify the public IP address of the inlets tunnel server:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get svc &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; istio-system istio-ingressgateway
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP       PORT&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;S&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;                                      AGE
istio-ingressgateway   LoadBalancer   10.43.5.77   144.126.234.124   15021:32412/TCP,80:31062/TCP,443:32063/TCP   51m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, create a DNS A record from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;argocd.example.com&lt;/code&gt; to the public IP address of the inlets tunnel server.&lt;/p&gt;
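&lt;p&gt;Before moving on, it is worth confirming that the record has propagated, since Let’s Encrypt must be able to resolve it for the HTTP01 challenge. Replace the hostname with your own domain:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Should print the same public IP as the istio-ingressgateway service
dig +short argocd.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;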

&lt;h2 id=&quot;install-cert-manager&quot;&gt;Install cert-manager&lt;/h2&gt;

&lt;p&gt;Install cert-manager with the following command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;cert-manager
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;create-a-lets-encrypt-issuer-and-certificate&quot;&gt;Create a Let’s Encrypt Issuer and certificate&lt;/h2&gt;

&lt;p&gt;The Certificate must be created in the same namespace as the Istio Ingress Gateway, i.e. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;istio-system&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Set your own email address, then create a file called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;letsencrypt-issuer.yaml&lt;/code&gt; with the following commands:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;export EMAIL=&quot;you@example.com&quot;&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;cat &amp;gt; letsencrypt-issuer.yaml &amp;lt;&amp;lt;EOF&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert-manager.io/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Issuer&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;istio-system&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;acme&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;email&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;$EMAIL&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;privateKeySecretRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;solvers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;{}&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;http01&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ingress&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;class&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;istio&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now create a Certificate resource:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;export DOMAIN=&quot;argocd.example.com&quot;&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;cat &amp;gt; certificate.yaml &amp;lt;&amp;lt;EOF&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert-manager.io/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Certificate&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-server-cert&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;istio-system&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;secretName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-server-tls&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;commonName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;$DOMAIN&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;dnsNames&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;$DOMAIN&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;issuerRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Issuer&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Apply the resources:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; letsencrypt-issuer.yaml
kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; certificate.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
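&lt;p&gt;Issuance usually completes within a minute or two. The following cert-manager commands show progress, and help debug a certificate that stays pending:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get certificate -n istio-system argocd-server-cert

# If READY stays False, inspect the ACME challenge for errors
kubectl describe challenge -n istio-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;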

&lt;h2 id=&quot;expose-the-argocd-dashboard&quot;&gt;Expose the ArgoCD dashboard&lt;/h2&gt;

&lt;p&gt;Create a file called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gateway.yaml&lt;/code&gt; with the following commands:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;gt; gateway.yaml &amp;lt;&amp;lt;EOF&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Gateway&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-gateway&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;istio&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ingressgateway&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;servers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;80&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;http&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;protocol&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;HTTP&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;hosts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&quot;&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;tls&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;httpsRedirect&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;443&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;protocol&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;HTTPS&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;hosts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&quot;&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;tls&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;credentialName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-server-tls&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;maxProtocolVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TLSV1_3&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;minProtocolVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TLSV1_2&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;mode&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;SIMPLE&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;cipherSuites&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ECDHE-ECDSA-AES128-GCM-SHA256&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ECDHE-RSA-AES128-GCM-SHA256&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ECDHE-ECDSA-AES128-SHA&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;AES128-GCM-SHA256&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;AES128-SHA&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ECDHE-ECDSA-AES256-GCM-SHA384&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ECDHE-RSA-AES256-GCM-SHA384&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ECDHE-ECDSA-AES256-SHA&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;AES256-GCM-SHA384&lt;/span&gt;
          &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;AES256-SHA&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Create a file called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;virtualservice.yaml&lt;/code&gt; with the following content:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;VirtualService&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-virtualservice&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;hosts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;gateways&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-gateway&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;http&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;match&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;uri&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;prefix&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;route&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;destination&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;argocd-server&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Apply the resources:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; gateway.yaml
kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; virtualservice.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;access-the-argocd-dashboard&quot;&gt;Access the ArgoCD dashboard&lt;/h2&gt;

&lt;p&gt;At this point you should be able to access the ArgoCD dashboard at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://argocd.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025-02-argocd-istio/argo-dash.png&quot; alt=&quot;ArgoCD dashboard exposed via my own domain&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;ArgoCD dashboard exposed via my own domain&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can use the command given via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade info argocd&lt;/code&gt; to get the initial password for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin&lt;/code&gt; user.&lt;/p&gt;
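&lt;p&gt;Alternatively, the initial password can be read straight from its Kubernetes Secret, then used to log in with the argocd CLI. This assumes a recent ArgoCD release, which stores the password in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;argocd-initial-admin-secret&lt;/code&gt; Secret:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Print the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo

# Log in against your own domain
argocd login argocd.example.com --username admin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;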

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;Exposing an application behind inlets requires no additional effort or changes to the application or configuration itself. It is a drop-in replacement for a cloud LoadBalancer service, and can be used to expose any TCP service running in your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;The majority of the steps we covered were due to the need to turn off the self-signed certificate within ArgoCD, and to obtain a certificate from Let’s Encrypt instead. This is good practice for any application exposed to the Internet: the certificates are already trusted by most devices and browsers, are free to obtain, and are rotated regularly.&lt;/p&gt;

&lt;p&gt;Both Istio and Kubernetes Ingress controllers are common options for routing traffic and managing TLS termination. We covered Istio here to help customers who are already using it. We tend to prefer ingress-nginx ourselves for its simplicity and ease of use.&lt;/p&gt;

&lt;p&gt;The ArgoCD documentation covers how to use ingress-nginx and other Ingress controllers: &lt;a href=&quot;https://argo-cd.readthedocs.io/en/latest/operator-manual/ingress/&quot;&gt;Docs: ArgoCD Ingress Configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/alexellis/arkade&quot;&gt;Arkade&lt;/a&gt; was used to install various Helm charts and CLIs purely for brevity, but you can use whatever tools you prefer to install them including Helm, brew or curl.&lt;/p&gt;

&lt;p&gt;If you are interested in learning more about inlets, check out the &lt;a href=&quot;https://docs.inlets.dev/&quot;&gt;inlets documentation&lt;/a&gt; or &lt;a href=&quot;https://inlets.dev/contact/&quot;&gt;reach out to talk to us&lt;/a&gt;.&lt;/p&gt;</content><author><name>Alex Ellis</name></author><category term="tutorial" /><category term="argocd" /><category term="istio" /><summary type="html">In this tutorial, you will learn how to expose the ArgoCD dashboard on the Internet with Istio and the inlets-operator for Kubernetes.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2025-02-argocd-istio/background.png" /><media:content medium="image" url="https://inlets.dev/images/2025-02-argocd-istio/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Quickstart - Automate &amp;amp; Scale Tunnels with Inlets Uplink</title><link href="https://inlets.dev/blog/tutorial/2024/12/09/quickstart-uplink.html" rel="alternate" type="text/html" title="Quickstart - Automate &amp;amp; Scale Tunnels with Inlets Uplink" /><published>2024-12-09T00:00:00+00:00</published><updated>2024-12-09T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2024/12/09/quickstart-uplink</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2024/12/09/quickstart-uplink.html">&lt;p&gt;Inlets Uplink is a complete solution for automating tunnels that scales anywhere from ten to tens of thousands of tunnels.&lt;/p&gt;

&lt;p&gt;This guide gets you started with deploying tunnels via the CLI, Kubernetes Custom Resource Definition (CRD), and REST API within about 30 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uplink vs inlets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, you may be familiar with &lt;a href=&quot;https://inlets.dev/&quot;&gt;inlets&lt;/a&gt; as a stand-alone binary and container image, and may have tried out the inlets-operator, which creates LoadBalancers for Services in your Kubernetes cluster. The stand-alone version is used to expose a few services from a private network to the Internet using HTTP or TCP tunnels, and it’s great for small teams and individuals.&lt;/p&gt;

&lt;p&gt;Uplink was built for DevOps teams in large companies, SaaS providers, IoT solutions, and hosting providers, who need to connect to many different remote endpoints using automation to make the process as seamless as possible. It provides packaging, APIs, and observability for automating the same inlets code that is used in stand-alone architectures.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/images/2024-12-uplink-quickstart/conceptual-uplink.png&quot;&gt;&lt;img src=&quot;/images/2024-12-uplink-quickstart/conceptual-uplink.png&quot; alt=&quot;Conceptual diagram&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Conceptual diagram showing management via the REST API (client-api), and a private Kubernetes API server being tunneled back to the management cluster for automation via ArgoCD.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;About Uplink&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It’s a scalable solution for automating tunnels&lt;/li&gt;
  &lt;li&gt;Installed via a Helm chart&lt;/li&gt;
  &lt;li&gt;Implements tenant isolation through Kubernetes namespaces&lt;/li&gt;
  &lt;li&gt;Includes a REST API, CLI and Custom Resource Definition (CRD) for managing tunnels&lt;/li&gt;
  &lt;li&gt;Includes detailed Prometheus metrics on active tunnels&lt;/li&gt;
  &lt;li&gt;Supports TCP and HTTP tunnels&lt;/li&gt;
  &lt;li&gt;Endpoints are private by default, but can be made public&lt;/li&gt;
&lt;/ul&gt;
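&lt;p&gt;As a flavour of the CRD, a minimal Tunnel definition looks something like the following. The names and namespaces are illustrative, so check the uplink documentation for the full schema:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: uplink.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  # One namespace per tenant keeps customers isolated from each other
  name: acmeco
  namespace: customer1
spec:
  licenseRef:
    name: inlets-uplink-license
    namespace: inlets
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;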

&lt;p&gt;&lt;strong&gt;Private or public HTTP &amp;amp; TCP endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, each HTTP and TCP endpoint is kept private, and can only be accessed from within the Kubernetes cluster using a &lt;a href=&quot;https://kubernetes.io/docs/concepts/services-networking/service/&quot;&gt;ClusterIP Service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This approach is ideal for managing customer endpoints or internal services that are hosted in private or hard to reach environments.&lt;/p&gt;

&lt;p&gt;For hosting providers, where you want some or all of the tunnels to be publicly accessible, you can turn on the “data router” component and use Kubernetes Ingress or Istio to route traffic from your custom domains to the tunnel server.&lt;/p&gt;

&lt;p&gt;When exposing tunnels to the Internet, you can create a new Ingress record for each domain, or use a wildcard domain so that a single Ingress record and TLS certificate can serve all tunnels. Learn more in: &lt;a href=&quot;https://docs.inlets.dev/uplink/expose-tunnels/&quot;&gt;Expose Tunnels to the Internet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our &lt;a href=&quot;https://inlets.dev/cloud&quot;&gt;inlets cloud&lt;/a&gt; product is built on top of multiple inlets uplink installations in different regions around the world. Our UI makes use of the REST API (client-api) that’s built into inlets uplink.&lt;/p&gt;

&lt;h2 id=&quot;quick-start&quot;&gt;Quick start&lt;/h2&gt;

&lt;p&gt;This guide is a quick start for installing inlets uplink, and skips over some of the more advanced features like customizing the Helm chart, or enabling public endpoints, which are &lt;a href=&quot;https://docs.inlets.dev/uplink/&quot;&gt;mentioned in the documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To make the tutorial as fast as possible, we will use our arkade tool to install a few initial Helm charts, but you are free to use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;helm&lt;/code&gt; directly if you prefer.&lt;/p&gt;

&lt;h3 id=&quot;bill-of-materials&quot;&gt;Bill of materials&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;A Kubernetes cluster with the ability to create public LoadBalancers&lt;/li&gt;
  &lt;li&gt;An &lt;a href=&quot;https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/&quot;&gt;Ingress controller&lt;/a&gt; or Istio&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://cert-manager.io/&quot;&gt;cert-manager&lt;/a&gt; to obtain TLS certificates&lt;/li&gt;
  &lt;li&gt;Helm 3&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://arkade.dev&quot;&gt;Arkade&lt;/a&gt; CLI&lt;/li&gt;
  &lt;li&gt;A domain under your control, where you can create a subdomain&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;install-the-ingress-controller-and-cert-manager&quot;&gt;Install the Ingress controller and cert-manager&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;ingress-nginx
arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;cert-manager
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, find the public address for your Ingress controller:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get svc ingress-nginx-controller &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; ingress-nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This will be an IP address or a DNS name; some providers, such as AWS EKS, will give you a DNS name. In the next step, create DNS A records if you received an IP address, otherwise create CNAME records.&lt;/p&gt;
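&lt;p&gt;If you want to script that decision, a simple check like the one below distinguishes the two cases. The ADDR value here is a stand-in for whatever kubectl printed, not real output:&lt;/p&gt;

```shell
# Decide between A and CNAME records based on the address that
# kubectl returned; ADDR is an example value, substitute your own.
ADDR="a1b2c3d4.elb.us-east-1.amazonaws.com"

if echo "$ADDR" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "Create DNS A records pointing at $ADDR"
else
  echo "Create DNS CNAME records pointing at $ADDR"
fi
```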

&lt;h3 id=&quot;what-needs-to-be-public&quot;&gt;What needs to be public?&lt;/h3&gt;

&lt;p&gt;The only service that needs to be public is the client-router, which is used by the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro uplink client&lt;/code&gt; command via its &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--url wss://&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;The client-api can be kept private and accessed from within the cluster over HTTP, or it can be turned off completely. If you only intend to manage tunnels via the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tunnel&lt;/code&gt; CLI, or the Kubernetes CRD (with Helm, ArgoCD, or kubectl), then the client-api can be disabled.&lt;/p&gt;
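&lt;p&gt;For example, if you only plan to manage tunnels with the CLI or the CRD, a values.yaml fragment along these lines would turn the REST API off entirely. This assumes the chart's clientApi.enabled toggle, which appears in the values file later in this guide:&lt;/p&gt;

```yaml
# Disable the REST API when tunnels are managed via CRD or CLI only
clientApi:
  enabled: false
```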

&lt;p&gt;Tunneled services will only be accessible via ClusterIP from within the Kubernetes cluster, so they are private by default. If needed, you can &lt;a href=&quot;https://docs.inlets.dev/uplink/expose-tunnels/&quot;&gt;Expose them on the Internet&lt;/a&gt; by following separate instructions.&lt;/p&gt;

&lt;h3 id=&quot;configure-the-uplink-helm-chart&quot;&gt;Configure the uplink Helm chart&lt;/h3&gt;

&lt;p&gt;Create two DNS A or CNAME records to the IP or DNS name given in the previous step:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The first is for the client-router, this is the public endpoint that the inlets client will use - &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;us1.uplink.example.com&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;The second is for the client-api, this is the REST API that can be used to manage tunnels - &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clientapi.us1.uplink.example.com&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, edit values.yaml:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LE_EMAIL&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;webmaster@example.com&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;CLIENT_ROUTER_DOMAIN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;us1.uplink.example.com&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;CLIENT_API_DOMAIN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;clientapi.us1.uplink.example.com&quot;&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; values.yaml
ingress:
  class: &quot;nginx&quot;
  issuer:
    enabled: true
    name: &quot;letsencrypt-prod&quot;
    email: &quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$LE_EMAIL&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;

clientRouter:
  # Customer tunnels will connect with a URI of:
  # wss://uplink.example.com/namespace/tunnel
  domain: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CLIENT_ROUTER_DOMAIN&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
  tls:
    ingress:
      enabled: true

clientApi:
  enabled: true
  domain: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CLIENT_API_DOMAIN&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
  tls:
    ingress:
      enabled: true
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The REST API provided by the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clientApi&lt;/code&gt; section is secured with an API token that we will generate in a moment. OIDC/OAuth2 can also be used, and is best when you have several different uplink regions.&lt;/p&gt;

&lt;p&gt;OIDC/OAuth2 authentication is configured through the following values, but is not recommended for the quick start, since it requires additional infrastructure such as Keycloak or Okta.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;clientApi&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# When using OAuth/OIDC tokens to authenticate the API instead of&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# a shared secret, set the issuer URL here.&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;issuerURL&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;https://keycloak.inlets.dev/realms/inlets-cloud&quot;&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# The audience is generally the same as the value of the domain field, however&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# some issuers like keycloak make the audience the client_id of the application/client.&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;audience&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;cloud.inlets.dev&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Before installing the Helm chart, we need to make sure some secrets exist in the cluster.&lt;/p&gt;

&lt;p&gt;Create the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets&lt;/code&gt; namespace. Do not customise it for the quick start, otherwise you’ll have to edit every subsequent command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl create namespace inlets
kubectl label namespace inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    inlets.dev/uplink&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Create a secret with the inlets-uplink license key:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl create secret generic &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets inlets-uplink-license &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--from-file&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;license&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.inlets/LICENSE_UPLINK
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Create the API token for the client-api:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;openssl rand &lt;span class=&quot;nt&quot;&gt;-base64&lt;/span&gt; 32|tr &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'\n'&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.inlets
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$token&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.inlets/client-api

kubectl create secret generic &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  client-api-token &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--from-file&lt;/span&gt; client-api-token&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.inlets/client-api
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now install the chart:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm upgrade &lt;span class=&quot;nt&quot;&gt;--install&lt;/span&gt; inlets-uplink &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  oci://ghcr.io/openfaasltd/inlets-uplink-provider &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--values&lt;/span&gt; ./values.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;verify-the-installation&quot;&gt;Verify the installation&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Check the deployments were created and are running with 1/1 replicas:&lt;/span&gt;
kubectl get deploy &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets

&lt;span class=&quot;c&quot;&gt;# Check the logs of the various components for any errors:&lt;/span&gt;
kubectl logs deploy/client-api &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets
kubectl logs deploy/inlets-router &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets
kubectl logs deploy/cloud-operator &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets

&lt;span class=&quot;c&quot;&gt;# Make sure that the certificates were issued by cert-manager:&lt;/span&gt;
kubectl get certificate &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;connect-a-remote-http-endpoint&quot;&gt;Connect a remote HTTP endpoint&lt;/h3&gt;

&lt;p&gt;You can define an HTTP or a TCP tunnel; for this example we will use HTTP.&lt;/p&gt;

&lt;p&gt;For a simple test, run the built-in HTTP fileserver from the inlets binary on your local machine, and share a new temporary folder:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /tmp/share
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Hello from &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;whoami&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /tmp/share/index.html

&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /tmp/share

inlets-pro fileserver &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--webroot&lt;/span&gt; /tmp/share &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--port&lt;/span&gt; 8080 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--allow-browsing&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The tunnel itself can be created via a Kubernetes CRD, via the REST API, or via the CLI.&lt;/p&gt;

&lt;p&gt;You’ll find examples for each in the documentation.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro get tunnel

inlets-pro tunnel list

inlets-pro tunnel create &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    fileserver &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt; 127.0.0.1:8080
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then get the connection string. You can format it as a CLI command, or as Kubernetes YAML to expose a Pod, Service, etc. within a private cluster.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro tunnel connect &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    fileserver &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--domain&lt;/span&gt; https://&lt;span class=&quot;nv&quot;&gt;$CLIENT_ROUTER_DOMAIN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The default output is for a CLI command you can run on your machine:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--format cli&lt;/code&gt; - default&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--format k8s_yaml&lt;/code&gt; - for Kubernetes YAML to apply to a private cluster&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--format systemd&lt;/code&gt; - generate a systemd unit file to install on a Linux machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both Kubernetes and systemd will restart the tunnel if it fails, and retain logs that you can view later.&lt;/p&gt;

&lt;p&gt;Start up the tunnel client with the command you were given.&lt;/p&gt;

&lt;p&gt;Remember, by default Inlets Uplink uses private endpoints, so you will need to run a Pod within the cluster to access the tunnel.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-it&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--restart&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Never &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;alpine:latest &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--command&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then install curl and access the tunnel:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# apk add curl&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# curl -i http://fileserver.inlets:8000/index.html&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;All HTTP tunnels bind to port 8000, and can multiplex multiple services over the same port using a Host header.&lt;/p&gt;

&lt;h3 id=&quot;create-a-tcp-tunnel&quot;&gt;Create a TCP tunnel&lt;/h3&gt;

&lt;p&gt;Perhaps you need to access a customer’s Postgres database from their private network?&lt;/p&gt;

&lt;p&gt;In this example we’ll define the tunnel using a Custom Resource instead of the CLI.&lt;/p&gt;

&lt;p&gt;Example Custom Resource to deploy a tunnel for a Postgres database:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;uplink.inlets.dev/v1alpha1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Tunnel&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db1&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inlets&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;licenseRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inlets-uplink-license&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inlets&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;tcpPorts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;5432&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Alternatively, the CLI can be used to create the same tunnel:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro tunnel create db1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--port&lt;/span&gt; 5432
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The quickest way to spin up a Postgres instance on your own machine would be to use Docker:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PASSWORD&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;head&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 16 /dev/urandom |shasum| &lt;span class=&quot;nb&quot;&gt;cut&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;' '&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; 1&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$PASSWORD&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ./postgres-password.txt

docker run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; postgres &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 5432:5432 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PASSWORD&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-ti&lt;/span&gt; postgres:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Connect with an inlets uplink client&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro tunnel connect db1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--domain&lt;/span&gt; https://&lt;span class=&quot;nv&quot;&gt;$CLIENT_ROUTER_DOMAIN&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt; 127.0.0.1:5432
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Run the above command to generate the tunnel client command, then run the generated command on your local machine to connect the tunnel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access the customer database from within Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that the tunnel is established, you can connect to the customer’s Postgres database from within Kubernetes via its ClusterIP service name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;db1.inlets.svc.cluster.local&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Try it out:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl run &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; psql &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;PGPORT&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;5432 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;PGPASSWORD&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; ./postgres-password.txt&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt; postgres:latest &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; psql &lt;span class=&quot;nt&quot;&gt;-U&lt;/span&gt; postgres &lt;span class=&quot;nt&quot;&gt;-h&lt;/span&gt; db1.inlets
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Try a command such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CREATE TABLE websites (url TEXT);&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;\dt&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;\l&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&quot;create-a-tcp-tunnel-using-the-rest-api&quot;&gt;Create a TCP tunnel using the REST API&lt;/h3&gt;

&lt;p&gt;You can view the reference documentation for the REST API for Inlets Uplink here: &lt;a href=&quot;https://docs.inlets.dev/uplink/rest-api/&quot;&gt;REST API&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This example will tunnel a private Kubernetes cluster to your management cluster for administration or automation through tools such as kubectl, ArgoCD, Helm, Flux, or your own Kubernetes operators.&lt;/p&gt;

&lt;p&gt;If you don’t use Kubernetes, you can still try out the commands, then delete the tunnel without connecting to it.&lt;/p&gt;

&lt;p&gt;If you do want to try the example and don’t have a private cluster handy, you can create one using Docker via the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kind create cluster --name kube1&lt;/code&gt; command. The kind tool is available via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade get kind&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As a general rule, the upstream should be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes.default.svc&lt;/code&gt; and the port should be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;443&lt;/code&gt;; for K3s clusters, the port is often changed to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;6443&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Retrieve the API token for the client-api from Kubernetes:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;TOKEN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl get secret &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets client-api-token &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{.data.client-api-token}&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  | &lt;span class=&quot;nb&quot;&gt;base64&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--decode&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Create a new tunnel using the REST API:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;CLIENT_API&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://clientapi.us1.uplink.example.com

&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;kube1&quot;&lt;/span&gt;

curl &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Authorization: Bearer &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;TOKEN&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-X&lt;/span&gt; POST &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$CLIENT_API&lt;/span&gt;/v1/tunnels &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'{&quot;name&quot;: &quot;kube1&quot;, &quot;namespace&quot;: &quot;inlets&quot;, &quot;tcpPorts&quot;: [443] }'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  | jq

&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;name&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;kube1&quot;&lt;/span&gt;,&lt;span class=&quot;s2&quot;&gt;&quot;namespace&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;inlets&quot;&lt;/span&gt;,&lt;span class=&quot;s2&quot;&gt;&quot;created&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;2024-12-09T11:01:38Z&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can verify the result via the API or via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get tunnels.uplink.inlets.dev/kube1 &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; inlets &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; wide

NAME    AUTHTOKENNAME   DEPLOYMENTNAME   TCP PORTS   DOMAINS   INGRESS
kube1   kube1           kube1            &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;443]   
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
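&lt;p&gt;For comparison, the same tunnel expressed declaratively, following the Custom Resource schema used for the db1 example earlier, would look roughly like this:&lt;/p&gt;

```yaml
# CRD equivalent of the POST /v1/tunnels call above
apiVersion: uplink.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: kube1
  namespace: inlets
spec:
  licenseRef:
    name: inlets-uplink-license
    namespace: inlets
  tcpPorts:
  - 443
```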

&lt;p&gt;List the tunnels created earlier, along with the new one:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Authorization: Bearer &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;TOKEN&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$CLIENT_API&lt;/span&gt;/v1/tunnels &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  | jq

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;name&quot;&lt;/span&gt;: &lt;span class=&quot;s2&quot;&gt;&quot;kube1&quot;&lt;/span&gt;,
    &lt;span class=&quot;s2&quot;&gt;&quot;namespace&quot;&lt;/span&gt;: &lt;span class=&quot;s2&quot;&gt;&quot;inlets&quot;&lt;/span&gt;,
    &lt;span class=&quot;s2&quot;&gt;&quot;tcpPorts&quot;&lt;/span&gt;: &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;
      443
    &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;,
    &lt;span class=&quot;s2&quot;&gt;&quot;connectedClients&quot;&lt;/span&gt;: 0,
    &lt;span class=&quot;s2&quot;&gt;&quot;created&quot;&lt;/span&gt;: &lt;span class=&quot;s2&quot;&gt;&quot;2024-12-09T11:01:38Z&quot;&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can build a connection command using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tunnel connect&lt;/code&gt; command:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;CLIENT_ROUTER_DOMAIN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;us1.uplink.example.com

inlets-pro tunnel connect kube1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--domain&lt;/span&gt; https://&lt;span class=&quot;nv&quot;&gt;$CLIENT_ROUTER_DOMAIN&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--format&lt;/span&gt; k8s_yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt; kubernetes.default.svc:443 &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; kube1-client.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Switch your Kubernetes cluster to the private cluster, then apply the YAML file for the inlets client with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl apply -f kube1-client.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Check that the client connected:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl logs deploy/kube1-inlets-client

&lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2024/12/09 11:27:03&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;info &lt;span class=&quot;nv&quot;&gt;msg&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Connecting to proxy&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;wss://us1.uplink.example.com/inlets/kube1&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2024/12/09 11:27:03&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;info &lt;span class=&quot;nv&quot;&gt;msg&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Connection established&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;client_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;51dbd4430bac4049b56f107481d25394
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now switch back to the management cluster’s context.&lt;/p&gt;

&lt;p&gt;The cluster will be available via a ClusterIP service in the inlets namespace, reachable as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube1.inlets&lt;/code&gt; on port 443.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access the Kubernetes API server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Either set up the additional TLS name for the Kubernetes API server’s SAN such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube1.inlets&lt;/code&gt; (K3s makes this easy via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--tls-san&lt;/code&gt;), or update the KUBECONFIG to provide the &lt;a href=&quot;https://docs.inlets.dev/tutorial/kubernetes-api-server/#update-your-kubeconfig-file-with-the-new-endpoint&quot;&gt;server name as per these instructions&lt;/a&gt;, or use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--insecure-skip-tls-verify&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Find the following section and edit it:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;cluster&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https://kube1.inlets:443&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;tls-server-name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubernetes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For a quick test, run a Pod in the cluster and try to access the Kubernetes API server using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--insecure-skip-tls-verify&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl run &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; kube1-connect &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;alpine:latest &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--restart&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Never &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; sh

&lt;span class=&quot;c&quot;&gt;# apk add kubectl curl&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# curl -i -k https://kube1.inlets:443&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Put your kubeconfig into place at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.kube/config&lt;/code&gt;, then update the server endpoint to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://kube1.inlets:443&lt;/code&gt; and set the TLS server name as shown above.&lt;/p&gt;

&lt;p&gt;You can use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cat &amp;gt; .kube/config&lt;/code&gt; to create the file, then paste in the contents from your machine. Hit Control+D when done. This is quicker than installing an editor such as nano or vim into the container.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# cd&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# mkdir -p .kube&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# cat &amp;gt; .kube/config&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# kubectl --context kind-kube1 get node&lt;/span&gt;
NAME                  STATUS   ROLES           AGE   VERSION
kube1-control-plane   Ready    control-plane   19m   v1.31.0
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
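&lt;p&gt;If you’d rather script this step than paste interactively, the same file can be written non-interactively. The snippet below is a sketch: it writes only the cluster entry from this tutorial into a temporary kubeconfig, and you would still copy the user credentials over from your real kubeconfig:&lt;/p&gt;

```shell
# A non-interactive alternative to pasting the kubeconfig in by hand.
# Writes only the cluster entry shown in this tutorial into a temporary
# kubeconfig; user credentials must still come from your real file.
KUBEDIR=$(mktemp -d)

printf 'apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    server: https://kube1.inlets:443\n    tls-server-name: kubernetes\n  name: kind-kube1\n' | tee "$KUBEDIR/config"

# Confirm the tunnel endpoint was written
grep 'kube1.inlets' "$KUBEDIR/config"
```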

&lt;h2 id=&quot;next-steps&quot;&gt;Next steps&lt;/h2&gt;

&lt;p&gt;In a short period of time, we installed inlets uplink to a Kubernetes cluster, and created public endpoints for the REST API (client-api) and the client-router. We then created three tunnels using the CLI, the CRD and the REST API. We used a single namespace for all the tunnels, but you can create a namespace per tenant, and then pass that namespace to each of these approaches.&lt;/p&gt;

&lt;p&gt;Once you have services such as Postgresql, SSH, Ollama, the Kubernetes API server, or your own TCP/HTTP services tunneled back to the management cluster, you can start accessing the endpoints as if they were directly available within the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;This means that all CLIs, tools, and products that work with whatever you’ve tunneled can be used without modification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common use-cases for inlets-uplink&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Do you have an agent for your SaaS product, that customers need to run on private networks? Access it via a tunnel.&lt;/li&gt;
  &lt;li&gt;Perhaps you manage a number of remote databases? Use pg_dump and pg_restore to back up and restore them over the tunnel.&lt;/li&gt;
  &lt;li&gt;Do you deploy to Kubernetes? Use kubectl, Helm, ArgoCD, or Flux to deploy applications; just run them in-cluster.&lt;/li&gt;
  &lt;li&gt;Do you write your own Kubernetes operators for customers? Just provide the updated KUBECONFIG to your Kubernetes operators and controllers.&lt;/li&gt;
  &lt;li&gt;Do you want to access GPUs hosted on Lambda Labs, Paperspace, or your own datacenter? Command and control your GPU instances from your management cluster.&lt;/li&gt;
  &lt;li&gt;Do you have a powerful GPU somewhere and want to infer against it using your central cluster? Run ollama remotely, and tunnel its REST API back.&lt;/li&gt;
  &lt;li&gt;Do you have many different edge devices? Tunnel SSHD and run Ansible, Puppet, or bash scripts against them just as if they were on your local network.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the documentation you can learn more about managing, monitoring and automating tunnels.&lt;/p&gt;

&lt;p&gt;If you’re new to Kubernetes, and would like us to give you a hand setting everything up, we’d be happy to help you with the installation, as part of your subscription benefits.&lt;/p&gt;

&lt;p&gt;Would you like a demo, or to speak to our team? &lt;a href=&quot;https://inlets.dev/contact&quot;&gt;Reach out here for a meeting&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;See also:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/uplink/&quot;&gt;Inlets Uplink documentation&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/uplink/rest-api/&quot;&gt;Inlets Uplink REST API&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/uplink/monitoring-tunnels/&quot;&gt;Monitor Inlets Uplink tunnels&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/tutorial/kubernetes-api-server/&quot;&gt;Expose a Kubernetes API Server via inlets&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/uplink/expose-tunnels/&quot;&gt;Expose Inlets Uplink tunnels publicly for Ingress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Alex Ellis</name></author><category term="tutorial" /><category term="tunnel" /><category term="management" /><category term="saas" /><category term="hosting" /><summary type="html">Inlets Uplink is a complete solution for automating tunnels, that scales from anywhere from ten to tens of thousands of tunnels.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-12-uplink-quickstart/background.png" /><media:content medium="image" url="https://inlets.dev/images/2024-12-uplink-quickstart/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">SSH Into Any Private Host With Inlets Cloud</title><link href="https://inlets.dev/blog/tutorial/2024/10/17/ssh-with-inlets-cloud.html" rel="alternate" type="text/html" title="SSH Into Any Private Host With Inlets Cloud" /><published>2024-10-17T00:00:00+00:00</published><updated>2024-10-17T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2024/10/17/ssh-with-inlets-cloud</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2024/10/17/ssh-with-inlets-cloud.html">&lt;p&gt;When you’re away from home it’s not only convenient, but often necessary to connect back to your machines. This could be to connect to a remote VSCode instance, run a backup, check on a process, or to debug a problem. SSH can also be used to port-forward services, or to copy files with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;scp&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rsync&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In this post I’ll show you how to use inlets cloud to get SSH access to any host on your network without needing a VPN, and without having to host a tunnel server. You’ll be able to expose one or more machines over a single tunnel, using DNS names to route traffic.&lt;/p&gt;

&lt;h2 id=&quot;what-is-inlets-cloud&quot;&gt;What is inlets cloud?&lt;/h2&gt;

&lt;p&gt;inlets cloud is a SaaS hosted version of inlets-pro servers, and it makes for a convenient way to share local webservices, APIs, and endpoints like a blog, database server, Ollama endpoint, or a Kubernetes cluster. The client for inlets cloud uses TCP passthrough, so when your service uses SSH or TLS, there’s no way for our infrastructure to decrypt the traffic.&lt;/p&gt;

&lt;p&gt;You can already use a self-hosted tunnel server to expose SSH for a single host, or for many with sshmux:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.inlets.dev/tutorial/ssh-tcp-tunnel/&quot;&gt;Expose a single host over SSH&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://inlets.dev/blog/2024/02/05/access-all-your-ssh-servers-with-sshmux.html&quot;&gt;Expose multiple hosts over SSH&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But in this tutorial, we’re going to do it with inlets-cloud, without having to host, maintain or even consider any infrastructure. There are two regions for the beta - the EU and USA.&lt;/p&gt;

&lt;p&gt;Traditionally, SSH had to be exposed on a public IP address using port-forwarding rules, or a random TCP port using a SaaS tunnel, or perhaps using a complex SaaS VPN solution.&lt;/p&gt;

&lt;p&gt;SSH is a simple technology that is designed to be exposed to the public Internet, the main issue is that it’s hard to multiplex multiple hosts over a single port.&lt;/p&gt;

&lt;p&gt;We built &lt;a href=&quot;https://inlets.dev/blog/2024/02/05/access-all-your-ssh-servers-with-sshmux.html&quot;&gt;sshmux&lt;/a&gt; into inlets to solve this problem, and now you can use it with inlets-cloud.&lt;/p&gt;

&lt;p&gt;The diagram below shows sshmux with a self-hosted tunnel server, but in this tutorial, we’ll be using inlets-cloud to provide the hosting instead.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://inlets.dev/images/2024-02-sshmux/conceptual.png&quot; alt=&quot;Self-hosted sshmux tunnel&quot; /&gt;&lt;/p&gt;

&lt;p&gt;sshmux is a simple way to expose multiple SSH backends over a single port, using a DNS name to route traffic. Just create a wildcard domain entry, or add one entry per host, and then configure sshmux to direct traffic accordingly.&lt;/p&gt;

&lt;p&gt;You will of course need to follow the general security advice on hardening SSH, which is easy to find on the Internet, or via a brief chat with ChatGPT.&lt;/p&gt;

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;You’ll need a domain name and the ability to create CNAME or A records; you’ll create one entry per host that you want to access.&lt;/li&gt;
  &lt;li&gt;OpenSSH set up on one or more machines in your private network, following the usual security precautions like disabling &lt;em&gt;Root access&lt;/em&gt; and &lt;em&gt;Password authentication&lt;/em&gt;.&lt;/li&gt;
  &lt;li&gt;An inlets cloud account; you can sign up for free at &lt;a href=&quot;https://cloud.inlets.dev&quot;&gt;cloud.inlets.dev&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;install-the-inlets-pro-client&quot;&gt;Install the inlets-pro client&lt;/h2&gt;

&lt;p&gt;Since the inlets-pro tunnel server will be hosted for us, we only need to download the inlets-pro client.&lt;/p&gt;

&lt;p&gt;If you haven’t already, install the inlets-pro client:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-sLSf&lt;/span&gt; https://get.arkade.dev | &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;sh

arkade get inlets-pro
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Alternatively, download the binary from the &lt;a href=&quot;https://github.com/inlets/inlets-pro/releases&quot;&gt;inlets-pro releases page&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;create-a-tunnel-on-inlets-cloud&quot;&gt;Create a tunnel on inlets-cloud&lt;/h2&gt;

&lt;p&gt;Create a new tunnel. The name is not important, but the list of sub-domains is: this is where you add one entry for each host you want to access.&lt;/p&gt;

&lt;p&gt;You can add more after creating the tunnel, so feel free to start with one, if that’s easier.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024-10-inlets-cloud-ssh/create-tunnel.png&quot; alt=&quot;Create a new tunnel&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Create any CNAME entries in your DNS provider as directed, and verify the top-level domain with a TXT record.&lt;/p&gt;

&lt;p&gt;Then navigate back to the tunnel details page, and copy the text under “Caddy/Nginx”:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024-10-inlets-cloud-ssh/copy-connect.png&quot; alt=&quot;Copy the connection command&quot; /&gt;&lt;/p&gt;

&lt;p&gt;We want to adjust it slightly, so that we can use it with sshmux instead of Nginx or Caddy. We’ll be directing SSH traffic to it, not TLS traffic for webservers.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;inlets-pro uplink client \
&lt;/span&gt;  --url=&quot;wss://cambs1.uplink.inlets.dev/alexellis/sshmux&quot; \
  --token=******** \
&lt;span class=&quot;gd&quot;&gt;-  --upstream=80=127.0.0.1:80 \
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+  --upstream=443=127.0.0.1:8443
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This remaps any traffic arriving at the hosted server on port 443 to local port 8443, which is the default port for sshmux.&lt;/p&gt;

&lt;p&gt;Run the command to start the tunnel, when connected, you’ll see the text: “Connection established”.&lt;/p&gt;

&lt;h2 id=&quot;start-sshmmux&quot;&gt;Start sshmux&lt;/h2&gt;

&lt;p&gt;sshmux is an SSH multiplexer that can expose multiple SSH backends over a single port, using a DNS name to route traffic.&lt;/p&gt;

&lt;p&gt;Typically, other solutions will require you to use a different port for each host, and then you have to memorise random numbers, or resort to clever SaaS-based VPNs.&lt;/p&gt;

&lt;p&gt;Create a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;config.yaml&lt;/code&gt; file:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;upstreams&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nuc.example.com&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;upstream&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;192.168.0.200:22&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nas.example.com&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;upstream&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;10.0.0.2:2222&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then start sshmux:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro sshmux server &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--port&lt;/span&gt; 8443 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now switch to a machine you’ll use to connect to the SSH services. This can be the same host for the sake of testing, but would probably be your laptop.&lt;/p&gt;

&lt;p&gt;SSH itself does not support sending an SNI header (a TLS hostname), so we use sshmux to wrap the traffic with one.&lt;/p&gt;

&lt;p&gt;You can use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openssl&lt;/code&gt; tool for this, or our convenience command &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro sshmux connect&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Edit &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.ssh/config&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Host &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;.example.com
    HostName %h
    Port 443
    ProxyCommand inlets-pro sshmux connect cambs1.uplink.inlets.dev:%p %h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The text &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cambs1.uplink.inlets.dev&lt;/code&gt; is the DNS entry you used for the CNAMEs in the previous step, so if you’re using a different region, use that value here instead.&lt;/p&gt;

&lt;p&gt;All this does is tell SSH to use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro sshmux connect&lt;/code&gt; command to connect to the remote host, passing the hostname and port number to the command.&lt;/p&gt;
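&lt;p&gt;As a quick illustration of the token substitution ssh performs, here is what the ProxyCommand from the config above expands to for one of the hosts. The sed pipeline is only a simulation; ssh does this expansion internally:&lt;/p&gt;

```shell
# Simulate ssh's ProxyCommand token expansion: %h is replaced with the
# hostname and %p with the port (443, from the Port directive above).
host=nuc.example.com
port=443
proxy_command='inlets-pro sshmux connect cambs1.uplink.inlets.dev:%p %h'

echo "$proxy_command" | sed "s/%h/$host/; s/%p/$port/"
# prints: inlets-pro sshmux connect cambs1.uplink.inlets.dev:443 nuc.example.com
```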

&lt;h2 id=&quot;connect-to-one-of-your-hosts&quot;&gt;Connect to one of your hosts&lt;/h2&gt;

&lt;p&gt;Now you can connect to one of your hosts:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh nuc.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can also use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-L&lt;/code&gt; flag to forward ports or services running on the remote host to your local machine.&lt;/p&gt;

&lt;p&gt;For instance, if you were running a Node.js application on port 3000 on the remote host, you could forward it to your local machine like this:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-L&lt;/span&gt; 3000:127.0.0.1:3000 nuc.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you have a Kubernetes cluster on the remote machine, you can port-forward services from it to your local machine, whilst on a different network. For instance, if the remote cluster is running &lt;a href=&quot;http://openfaas.com&quot;&gt;OpenFaaS CE&lt;/a&gt;, and you wanted to access its Prometheus dashboard, you could do this:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-L&lt;/span&gt; 9090:127.0.0.1:9090 nuc.example.com
&lt;span class=&quot;c&quot;&gt;# kubectl port-forward -n openfaas svc/prometheus 9090:9090&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then open a browser to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:9090&lt;/code&gt; to access the Prometheus dashboard.&lt;/p&gt;

&lt;p&gt;You can specify multiple hosts and ports, i.e. for both Prometheus and the OpenFaaS gateway:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-L&lt;/span&gt; 9090:127.0.0.1:9090 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-L&lt;/span&gt; 8080:127.0.0.1:8080 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    nuc.example.com
&lt;span class=&quot;c&quot;&gt;# kubectl port-forward -n openfaas svc/prometheus 9090:9090 &amp;amp;&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# kubectl port-forward -n openfaas svc/gateway 8080:8080 &amp;amp;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Of course you can also copy files with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;scp&lt;/code&gt; or use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rsync&lt;/code&gt; over the SSH connection.&lt;/p&gt;

&lt;p&gt;Copy a single remote file to your host:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;scp nuc.example.com:~/debug.log &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Use rsync to copy the k3sup code that you’re hacking on to the remote computer:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;rsync &lt;span class=&quot;nt&quot;&gt;-av&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;ssh&quot;&lt;/span&gt; ~/go/src/github.com/alexellis/k3sup nuc.example.com:
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The port override for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;443&lt;/code&gt; is not necessary since the .ssh/config file will handle this, but you can explicitly add the flag if you want. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh&lt;/code&gt; uses &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-p&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;scp&lt;/code&gt; uses &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-P&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;You’ve now got a secure way to access any host on your network, without needing to host a tunnel server, or to set up a VPN. This is a great way to access your home network, or to provide support to friends and family.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adding extra hosts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Any time you want to add or remove a host, you can do so via the inlets-cloud dashboard, by navigating to the “Tunnels” page and editing the list of domains. Then make sure you have an entry in your sshmux config file, and restart it with the new configuration.&lt;/p&gt;

&lt;p&gt;Access is completely private: there is no way for us to decrypt the SSH traffic, and it gets passed directly on to your own machine inside your local network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IP filtering/allow list&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To take things further, sshmux also supports an IP allow list, which is available for inlets-cloud and self-hosted tunnels.&lt;/p&gt;

&lt;p&gt;If the IP for your mobile hotspot was 35.202.222.154, you could write the following to restrict access to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nuc.example.com&lt;/code&gt; to only yourself:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;upstreams&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nuc.example.com&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;upstream&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;192.168.0.120:22&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;allowed&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;35.202.222.154&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then just add a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--proxy-protocol&lt;/code&gt; argument to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro sshmux&lt;/code&gt; command before restarting it. You can use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v1&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v2&lt;/code&gt; as the argument, just make sure it matches the one you selected for the tunnel server.&lt;/p&gt;

&lt;p&gt;Watch a video walk-through of this tutorial:&lt;/p&gt;

&lt;div style=&quot;margin: 0 auto;&quot;&gt;
    
    &lt;div class=&quot;ytcontainer&quot;&gt;
        &lt;iframe width=&quot;560&quot; height=&quot;315&quot; class=&quot;yt&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; src=&quot;https://www.youtube.com/embed/ws3-VlL2884&quot;&gt;&lt;/iframe&gt;
    &lt;/div&gt;
&lt;/div&gt;</content><author><name>Alex Ellis</name></author><category term="tutorial" /><category term="ssh" /><category term="remoteaccess" /><category term="inletscloud" /><category term="support" /><summary type="html">When you’re away from home it’s not only convenient, but often necessary to connect back to your machines. This could be to connect to a remote VSCode instance, run a backup, check on a process, or to debug a problem. SSH can also be used to port-forward services, or to copy files with scp or rsync.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-10-inlets-cloud-ssh/background.jpg" /><media:content medium="image" url="https://inlets.dev/images/2024-10-inlets-cloud-ssh/background.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Get Real Client IPs with Ingress Nginx, Caddy or Traefik</title><link href="https://inlets.dev/blog/tutorial/2024/10/08/real-client-ips-ingress-nginx-caddy-traefik.html" rel="alternate" type="text/html" title="Get Real Client IPs with Ingress Nginx, Caddy or Traefik" /><published>2024-10-08T00:00:00+00:00</published><updated>2024-10-08T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2024/10/08/real-client-ips-ingress-nginx-caddy-traefik</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2024/10/08/real-client-ips-ingress-nginx-caddy-traefik.html">&lt;p&gt;When you’re running a reverse proxy directly on a host, or an Ingress Controller in Kubernetes, you can get the real client IP with inlets.&lt;/p&gt;

&lt;p&gt;The real client IP address is required for rate-limiting, effective logging, understanding where your users are coming from geographically, and to prevent abuse. Just bear in mind that if you choose to store these addresses within a database, or server logs, you may need to comply with data protection laws like GDPR.&lt;/p&gt;

&lt;p&gt;We’ve covered how Proxy Protocol works before in the original post &lt;a href=&quot;https://inlets.dev/blog/2022/09/02/real-client-ips-with-proxy-protocol.html&quot;&gt;Get Real Client IPs with K3s and Traefik&lt;/a&gt;, so that’s a good refresher if you’d like to cover the fundamentals again.&lt;/p&gt;

&lt;p&gt;In this post we’ll focus on the configuration you need, so you can come back here and copy/paste it when you need it.&lt;/p&gt;

&lt;h2 id=&quot;the-inlets-setup&quot;&gt;The Inlets Setup&lt;/h2&gt;

&lt;p&gt;Deploy a host and install the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tcp server&lt;/code&gt;, you can &lt;a href=&quot;https://docs.inlets.dev/tutorial/manual-tcp-server/&quot;&gt;do this manually&lt;/a&gt; or via cloud-init.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://docs.inlets.dev/reference/inletsctl/&quot;&gt;inletsctl tool&lt;/a&gt; can create a host using a cloud provider’s API, and pre-install the inlets-pro server for you, with a randomly generated authentication token, and will print out all the details at the end.&lt;/p&gt;

&lt;p&gt;Whichever method you choose, log into the host and edit the systemd unit file for inlets-pro; find it via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo systemctl cat inlets-pro&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Add &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--proxy-protocol=v2&lt;/code&gt; to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ExecStart&lt;/code&gt; line, if it’s already present with an empty value, update it instead.&lt;/p&gt;

&lt;p&gt;The v2 protocol is widely supported and more efficient than v1, since it sends the header in a binary format rather than as human-readable text.&lt;/p&gt;
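&lt;p&gt;For intuition, a v1 header is a single human-readable line prepended to the connection before any application data flows; the addresses below are illustrative:&lt;/p&gt;

```shell
# A PROXY protocol v1 header, as a tunnel server would prepend it.
# Fields: address family, client IP, destination IP, client port,
# destination port.
header='PROXY TCP4 203.0.113.7 192.168.0.10 51311 443'

# The real client IP is the third field:
echo "$header" | awk '{print $3}'
# prints: 203.0.113.7
```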

&lt;p&gt;This article assumes that you are running the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tcp server&lt;/code&gt; process directly on an Internet-facing host. If you are running it behind a cloud load-balancer, you’ll need to add the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--lb-proxy-protocol&lt;/code&gt; flag to the inlets-pro server specifying the protocol version sent by the load-balancer. The rest of the article applies in the same way.&lt;/p&gt;

&lt;h2 id=&quot;real-ips-for-caddy&quot;&gt;Real IPs for Caddy&lt;/h2&gt;

&lt;p&gt;Caddy can be installed quickly, including its systemd unit file, special caddy user, and extra directories with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade system install caddy&lt;/code&gt; command. You can also use a custom build, or run through all the manual steps yourself from the &lt;a href=&quot;https://caddyserver.com/docs/getting-started&quot;&gt;Caddy documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’ve included this section for when you want to run a reverse proxy in a VM, container, or directly on your machine. The other examples are focused on running a reverse proxy in Kubernetes, called an Ingress Controller. For instance, you may be running OpenFaaS via &lt;a href=&quot;https://github.com/openfaas/faasd&quot;&gt;faasd CE&lt;/a&gt;. In that case, Caddy is a quick way to get TLS termination for your OpenFaaS functions, and anything else you are running in your setup like Grafana.&lt;/p&gt;

&lt;p&gt;The following settings are for when you run Caddy directly on your own machine, and use an inlets TCP tunnel server to expose it to the Internet, pointing ports 80 and 443 to your Caddy instance.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;{
    email &quot;webmaster@example.com&quot;

    acme_ca https://acme-v02.api.letsencrypt.org/directory
    http_port 80
    https_port 443

   servers {
     listener_wrappers {
       proxy_protocol {
         timeout 2s
         allow 0.0.0.0/0
       }
      tls
    }
 }
}

orders.example.com {
    reverse_proxy 127.0.0.1:8080
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There are a number of extra settings over a basic Caddyfile for Let’s Encrypt, but the main one we need is the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;proxy_protocol&lt;/code&gt; listener wrapper.&lt;/p&gt;

&lt;p&gt;You’ll see I’ve also included an upstream for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;orders.example.com&lt;/code&gt;, which is a plain HTTP service running on port 8080. It will receive the real client IP from Caddy, and can read it from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;X-Real-IP&lt;/code&gt; header.&lt;/p&gt;
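&lt;p&gt;To show the other side of that, here is a minimal sketch of such an upstream service, assuming it listens on port 8080 to match the Caddyfile above. The fallback behaviour, and the use of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;X-Forwarded-For&lt;/code&gt; as a secondary source, are illustrative:&lt;/p&gt;

```python
# Minimal sketch of an upstream HTTP service that reads the client IP
# forwarded by a reverse proxy. Header precedence and the fallback
# value are illustrative choices, not part of the original article.
from http.server import BaseHTTPRequestHandler, HTTPServer


def real_client_ip(headers, fallback="unknown"):
    """Return the forwarded client IP, preferring X-Real-IP, then the
    first entry of X-Forwarded-For, then a placeholder."""
    ip = headers.get("X-Real-IP")
    if ip:
        return ip
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return fallback


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ("Hello, " + real_client_ip(self.headers) + "\n").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)


# To run it behind the Caddyfile above:
# HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

&lt;p&gt;Any request proxied through Caddy will then greet the caller by their real public IP, rather than the tunnel or proxy address.&lt;/p&gt;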

&lt;h2 id=&quot;real-ips-for-ingress-nginx&quot;&gt;Real IPs for ingress-nginx&lt;/h2&gt;

&lt;p&gt;I tend to install ingress-nginx via arkade, with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade install ingress-nginx&lt;/code&gt;. This is similar to applying the static YAML that is available in the &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/deploy/&quot;&gt;project’s documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol&quot;&gt;ingress-nginx documentation site&lt;/a&gt; explains the various settings that can be configured for an installation of ingress-nginx. One of those options is for Proxy Protocol. You don’t need to set a version, just set it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;true&lt;/code&gt; and either version will be accepted.&lt;/p&gt;

&lt;p&gt;Edit the ConfigMap for ingress-nginx. When installed via arkade, it will be called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ingress-nginx-controller&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl edit configmap ingress-nginx-controller
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Within the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;data:&lt;/code&gt; section, add:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;data:
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+  use-proxy-protocol: &quot;true&quot;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
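&lt;p&gt;Alternatively, the same change can be applied non-interactively, which is handy for scripts or CI. The ConfigMap name assumes an arkade installation:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl patch configmap ingress-nginx-controller \
  --type merge \
  -p '{&quot;data&quot;:{&quot;use-proxy-protocol&quot;:&quot;true&quot;}}'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;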

&lt;p&gt;There are some additional related headers, which you can customise:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;data:
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+  compute-full-forwarded-for: &quot;true&quot;
+  enable-real-ip: &quot;true&quot;
+  proxy-protocol-header-timeout: 1s
+  set-real-ip-from: 0.0.0.0/0
+  use-forwarded-headers: &quot;true&quot;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once updated, the controller will reload its settings and will only accept requests which have a Proxy Protocol header. If you send a request without the header, it will be rejected, so it must only be accessed via the inlets tunnel.&lt;/p&gt;
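&lt;p&gt;To confirm the new behaviour, you can bypass the tunnel with a port-forward and send a plain HTTP request. The service name assumes an arkade installation; the local port is arbitrary:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl port-forward svc/ingress-nginx-controller 8081:80 &amp;amp;

curl -i --max-time 5 http://127.0.0.1:8081
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Since the request carries no Proxy Protocol header, it should fail, and you should see a &quot;broken header&quot; error in the controller’s logs.&lt;/p&gt;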

&lt;h2 id=&quot;real-ips-for-traefik&quot;&gt;Real IPs for Traefik&lt;/h2&gt;

&lt;p&gt;This section was taken &lt;a href=&quot;https://inlets.dev/blog/2022/09/02/real-client-ips-with-proxy-protocol.html&quot;&gt;from the original blog post&lt;/a&gt;. You can refer there for more details.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://traefik.io&quot;&gt;Traefik&lt;/a&gt; ships with &lt;a href=&quot;https://k3s.io&quot;&gt;K3s&lt;/a&gt; by default, and is installed into the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-system&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;When I create k3s clusters with &lt;a href=&quot;https://k3sup.dev&quot;&gt;k3sup&lt;/a&gt;, I tend to turn off Traefik in order to add ingress-nginx which I find to be simpler, broadly used in production setups, and easier to operate. I just run: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k3sup install --no-extras&lt;/code&gt; to make sure Traefik won’t be installed.&lt;/p&gt;

&lt;p&gt;If you want to use Traefik, you can do so by editing the deployment:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system edit deployment traefik
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then add the following flags:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    spec:                             
      containers:                              
      - args:                                                         
&lt;span class=&quot;gi&quot;&gt;+        - --entryPoints.web.proxyProtocol.insecure=true
+        - --entryPoints.web.proxyProtocol.trustedIPs=0.0.0.0/24
+        - --entryPoints.websecure.proxyProtocol.insecure=true
+        - --entrypoints.websecure.http.tls
+        - --entrypoints.web.address=:8000/tcp                
+        - --entrypoints.websecure.address=:8443/tcp
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I also add &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;- --accesslog=true&lt;/code&gt; to help find any potential issues with the configuration.&lt;/p&gt;

&lt;p&gt;If Traefik doesn’t detect the settings immediately, you can restart it with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl rollout restart -n kube-system deployment traefik&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you wish to swap Traefik for ingress-nginx, you can run:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl delete &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system deployment traefik
kubectl delete &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system service traefik
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;I wanted this article to be a short and sweet reference for you, on how to configure the most popular reverse proxies to accept the Proxy Protocol header, so that your applications can get the real client IP.&lt;/p&gt;

&lt;p&gt;If you’re running an alternative Kubernetes Ingress Controller, &lt;a href=&quot;https://istio.io/latest/docs/ops/configuration/traffic-management/network-topologies/#proxy-protocol&quot;&gt;Istio Gateway&lt;/a&gt;, or a stand-alone proxy, all you need to do after configuring the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tcp server&lt;/code&gt; is to enable the Proxy Protocol support using the appropriate settings.&lt;/p&gt;

&lt;p&gt;If you have any questions or suggestions, please feel free to reach out. Whenever you sign up for a subscription for inlets, you’ll get an invite to our Discord community. If you signed up some time ago, reach out via the form on the website and we’ll get you an invite.&lt;/p&gt;

&lt;p&gt;See also:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://k3sup.dev&quot;&gt;K3sup - install K3s remotely via SSH&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://inlets.dev/docs/inletsctl/&quot;&gt;inletsctl - automate cloud hosts for inlets-pro servers&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/alexellis/arkade&quot;&gt;arkade - Open Source Marketplace For Developer Tools&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://caddyserver.com&quot;&gt;Caddy - the HTTP/2 web server with automatic HTTPS&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/&quot;&gt;Ingress Nginx - Ingress controller for Kubernetes&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://traefik.io&quot;&gt;Traefik - The Cloud Native Edge Router&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Alex Ellis</name></author><category term="tutorial" /><category term="load-balancer" /><category term="ingress-controller" /><category term="reverse-proxy" /><category term="proxy-protocol" /><summary type="html">When you’re running a reverse proxy directly on a host, or an Ingress Controller in Kubernetes, you can get the real client IP with inlets.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-10-real-ips/background.png" /><media:content medium="image" url="https://inlets.dev/images/2024-10-real-ips/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Create Highly Available Tunnels With A Load Balancer</title><link href="https://inlets.dev/blog/tutorial/2024/09/02/highly-available-tunnels.html" rel="alternate" type="text/html" title="Create Highly Available Tunnels With A Load Balancer" /><published>2024-09-02T00:00:00+00:00</published><updated>2024-09-02T00:00:00+00:00</updated><id>https://inlets.dev/blog/tutorial/2024/09/02/highly-available-tunnels</id><content type="html" xml:base="https://inlets.dev/blog/tutorial/2024/09/02/highly-available-tunnels.html">&lt;p&gt;We look at Highly Available inlets tunnels, how to integrate with Proxy Protocol to get original source IP addresses, and how to configure a cloud load balancer.&lt;/p&gt;

&lt;p&gt;For the majority of use-cases, whether for development or production, a single tunnel server VM and client as a pair will be more than sufficient.&lt;/p&gt;

&lt;p&gt;However, some teams mandate that all infrastructure is run in a Highly Available (HA) configuration, where two or more servers or instances of a service are running at all times. This is to ensure that if one server fails, the other can take over and continue to serve traffic.&lt;/p&gt;

&lt;p&gt;If you’re using a public cloud offering like AWS, GCP, Hetzner, DigitalOcean, or Linode, you’ll have access to managed load balancers. These are easy to deploy, come with their own stable public IP address, and can be used to route traffic to one or more virtual machines. How does failover work? Typically, when you create the load balancer, you’ll specify a health check, which the load balancer will run on a continual basis; if it detects that one of the servers is unhealthy, it will stop sending traffic to it.&lt;/p&gt;

&lt;p&gt;What is a cloud load balancer anyway? If you dig around, you’ll find documentation, blog posts and conference talks where cloud vendors explain how they use either HAProxy, keepalived, or Envoy to provide their managed load-balancers. For anyone who does not have access to a managed load balancer, you can configure and deploy these open source tools yourself, just make sure that your load balancer itself does not become its own Single Point of Failure (SPOF).&lt;/p&gt;

&lt;p&gt;In this post, we’ll look at how to create a Highly Available inlets tunnel with a cloud load balancer, and Proxy Protocol support, to get the original source IP addresses.&lt;/p&gt;

&lt;h3 id=&quot;a-note-from-an-inlets-user-on-ha&quot;&gt;A note from an inlets user on HA&lt;/h3&gt;

&lt;p&gt;Jack Harmon is a long time inlets user, and sent in a photo of his homelab, which is running Kubernetes (setup with kubeadm) and Traefik as an Ingress Controller. He has a HA tunnel configuration using a Global LoadBalancer on DigitalOcean.&lt;/p&gt;

&lt;p&gt;Here’s what Jack had to say about his setup:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I used to work for a company that set up secure cloud infrastructure for governments, which is where I got interested in home labs and HA config, as well as zero trust security. I’ve since left and do financial consulting for businesses. I now have a home lab running in my closet, and another in a city apartment for redundancy.&lt;/p&gt;

  &lt;p&gt;I use my setup to host my various personal software projects, file sharing, to offer hosting to friends and family, and yes - as a secure way to store client information (or show them financial dashboards) on occasion. Mostly, though, it’s an extremely overbuilt hobby project that I’ve sunk thousands of hours into over the years. I realize there may be a slightly cheaper option with its own limitations, but I prefer the privacy and control, and to support independent developers like yourself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;/images/2024-09-ha-tunnels/jack-lab.jpg&quot;&gt;&lt;img src=&quot;/images/2024-09-ha-tunnels/jack-lab.jpg&quot; alt=&quot;Jack's lab&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jack also wanted to get SSH access into various VMs in the lab, so I told him about our &lt;a href=&quot;https://inlets.dev/blog/2024/02/05/access-all-your-ssh-servers-with-sshmux.html&quot;&gt;sshmux add-on for inlets&lt;/a&gt; where you can expose dozens of sshd servers over a single inlets tunnel. This is a great way to get access to your VMs without needing to expose them directly to the internet. Of course, if you do go down this route, make sure you disable Password login, so that you’re only using SSH keys. SSH keys are going to be almost impossible to attack with brute-force.&lt;/p&gt;
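&lt;p&gt;Disabling password logins is a one-line change on each host. Note that the SSH service may be named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sshd&lt;/code&gt; depending on your distribution:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# /etc/ssh/sshd_config
PasswordAuthentication no
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo systemctl reload ssh&lt;/code&gt; to pick up the change.&lt;/p&gt;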

&lt;h2 id=&quot;proxy-protocol-for-real-client-ips&quot;&gt;Proxy Protocol for real client IPs&lt;/h2&gt;

&lt;p&gt;When a TCP connection is made from one server to another, the source IP address is the IP of the server that initiated the connection. This is a problem whenever a proxy or tunnel sits between the client and the service: the service only ever sees the intermediary’s IP address. Whether you’re working with a Pod in Kubernetes, a VM in an autoscaling group behind a Load Balancer, a service hidden behind Nginx, or a service exposed via inlets, if you want the real IP address of the client, you will need to make use of the Proxy Protocol.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.haproxy.com/blog/use-the-proxy-protocol-to-preserve-a-clients-ip-address&quot;&gt;Proxy Protocol&lt;/a&gt; (popularised by HAProxy) is a simple protocol that is sent at the beginning of a TCP connection, and contains the original source IP address and port of the client. This is then passed through the proxy, and can be used by the service to determine the real IP address of the client. There are two versions, v1 which is sent in plain text, and v2 which is sent in binary.&lt;/p&gt;
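&lt;p&gt;As a concrete illustration, a v1 header is a single ASCII line sent before any application data, such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PROXY TCP4 203.0.113.7 198.51.100.22 56324 443&lt;/code&gt;. The following sketch parses one; for brevity it only handles the TCP4/TCP6 form, not the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PROXY UNKNOWN&lt;/code&gt; case:&lt;/p&gt;

```python
def parse_proxy_v1(line: bytes) -> dict:
    """Parse a Proxy Protocol v1 header line, e.g.
    b"PROXY TCP4 203.0.113.7 198.51.100.22 56324 443\r\n".
    Minimal sketch: only the TCP4/TCP6 form is handled here.
    """
    parts = line.rstrip(b"\r\n").decode("ascii").split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a Proxy Protocol v1 TCP header")
    _, proto, src, dst, src_port, dst_port = parts
    return {
        "proto": proto,            # TCP4 or TCP6
        "src": src,                # the real client IP
        "dst": dst,
        "src_port": int(src_port),
        "dst_port": int(dst_port),
    }
```

&lt;p&gt;A reverse proxy that supports Proxy Protocol does this parsing for you; the sketch simply shows how little information the header carries, and why v1 is so easy to inspect on the wire.&lt;/p&gt;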

&lt;p&gt;&lt;a href=&quot;/images/2024-09-ha-tunnels/conceptual-proxy.png&quot;&gt;&lt;img src=&quot;/images/2024-09-ha-tunnels/conceptual-proxy.png&quot; alt=&quot;Conceptual diagram of Proxy Protocol&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Until the October release of inlets, Proxy Protocol was supported when the inlets tunnel server was run directly on an internet-facing server via the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--proxy-protocol&lt;/code&gt; flag. That meant the receiving end of the tunnel, the “upstream”, would get a Proxy Protocol header and would need to be configured to understand it.&lt;/p&gt;

&lt;p&gt;With the new release, Proxy Protocol is also supported when the inlets tunnel server is behind a load balancer, by setting the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--lb-proxy-protocol&lt;/code&gt; flag in addition to the existing flag.&lt;/p&gt;

&lt;h2 id=&quot;the-conceptual-design&quot;&gt;The conceptual design&lt;/h2&gt;

&lt;p&gt;Here’s a diagram of the design we’re going to implement:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/images/2024-09-ha-tunnels/conceptual-ha.png&quot;&gt;&lt;img src=&quot;/images/2024-09-ha-tunnels/conceptual-ha.png&quot; alt=&quot;Conceptual diagram&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An inlets tunnel server has two parts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The control-plane, usually served on port 8123. Clients connect here to establish a tunnel.&lt;/li&gt;
  &lt;li&gt;The data-plane, these ports can vary, but in our example are 80 and 443, to expose a reverse proxy like Nginx.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The control-plane must not be placed behind the load balancer: if it were, both clients could end up connected to the same server, negating the HA design.&lt;/p&gt;

&lt;p&gt;The data-plane will sit behind the load-balancer, and its health checks will ensure that if either of the tunnels goes down, or either of the VMs crashes, the load balancer will stop sending traffic to it.&lt;/p&gt;

&lt;p&gt;So, to replicate the architecture in the diagram, you’ll need to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Deploy two VMs with the inlets-pro tcp server installed.&lt;/li&gt;
  &lt;li&gt;Set up a cloud load balancer to route traffic to the two VMs, on ports 80 and 443.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the private service that is being exposed over the tunnel supports Proxy Protocol, then this can be used to obtain real client IP addresses. Most proxies and reverse proxies do support the protocol, but if you don’t want to configure this, or don’t need real client IPs to be sent to the private service, you can ignore all references to Proxy Protocol in this post.&lt;/p&gt;

&lt;p&gt;Now, ensure you enable Proxy Protocol support on the load balancer itself. Some clouds allow you to specify which version of Proxy Protocol you want to use; if possible, pick v2. DigitalOcean only supports &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Whichever you choose, you will need to configure your inlets-pro tcp server process to use the same version via the new &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--lb-proxy-protocol&lt;/code&gt; flag. Valid options are: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;&quot;&lt;/code&gt; (off), &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;v1&quot;&lt;/code&gt;, or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;v2&quot;&lt;/code&gt;.&lt;/p&gt;
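&lt;p&gt;For example, if your load balancer sends v1 (as on DigitalOcean), but you want the upstream to receive v2, the server invocation might look like this. The token and TLS flags are illustrative; the two Proxy Protocol flags are the point here:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;inlets-pro tcp server \
  --token-file /etc/inlets/token \
  --auto-tls \
  --lb-proxy-protocol=v1 \
  --proxy-protocol=v2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;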

&lt;p&gt;Then you’ll deploy two inlets-pro tcp clients in the private network, pointing at your upstream, i.e. an nginx reverse proxy. Each client must point to its matching server, and not to the load-balancer.&lt;/p&gt;
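&lt;p&gt;As a sketch, each client connects to the control-plane of its own server, never to the load balancer. The hostnames, token path, upstream address, and exact flags below are illustrative:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Client 1, paired with tunnel server VM 1
inlets-pro tcp client \
  --url wss://server-1.example.com:8123 \
  --token-file /etc/inlets/token \
  --upstream 10.0.1.10 \
  --ports 80,443

# Client 2, paired with tunnel server VM 2
inlets-pro tcp client \
  --url wss://server-2.example.com:8123 \
  --token-file /etc/inlets/token \
  --upstream 10.0.1.10 \
  --ports 80,443
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;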

&lt;h2 id=&quot;fail-over-not-load-balancing&quot;&gt;Fail-over not load-balancing&lt;/h2&gt;

&lt;p&gt;You can actually connect more than one inlets-pro tcp client to a single inlets-pro tcp server, for the sake of load-balancing, and increasing the number of connections that can be handled.&lt;/p&gt;

&lt;p&gt;However, load-balancing is not fail-over. If the VM hosting the inlets-pro tcp server fails or crashes, then you won’t be able to serve any traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/images/2024-09-ha-tunnels/conceptual-ha-failover.png&quot;&gt;&lt;img src=&quot;/images/2024-09-ha-tunnels/conceptual-ha-failover.png&quot; alt=&quot;Fail-over in practice&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the diagram above, we can see that the VM with the private IP 10.0.0.3 failed, and is not reachable by the load balancer. It will mark this endpoint as unhealthy, and stop sending traffic to it.&lt;/p&gt;

&lt;p&gt;The other VM with IP 10.0.0.2 is still healthy, and will continue to serve traffic.&lt;/p&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;In this post, we’ve looked at how to create a Highly Available inlets tunnel with a cloud Load Balancer, and Proxy Protocol support, to get the original source IP addresses.&lt;/p&gt;

&lt;p&gt;If you want to keep your configuration simple, whilst still having a HA setup, you can forgo the use of Proxy Protocol, however I tend to recommend it for debugging and security purposes. Knowing the source IP of your users, or the IP of the client that is connecting to your service can give you insights on where your traffic is coming from, and can be used to block or allow certain IP ranges.&lt;/p&gt;

&lt;p&gt;If you’re happy with your current setup with inlets, then there’s nothing you need to change. But one thing you may like to try out, instead of a HA setup, is running a second inlets-pro tunnel client, connected to the same server. Each connection is load-balanced, meaning you can handle additional traffic and more connections.&lt;/p&gt;

&lt;p&gt;If you’re exposing a Kubernetes Ingress Controller, here are the instructions for setting it up to expect a Proxy Protocol header.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://inlets.dev/blog/2022/09/02/real-client-ips-with-proxy-protocol.html&quot;&gt;Traefik, K3s and Proxy Protocol&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol&quot;&gt;Nginx and Proxy Protocol&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both approaches involve editing either a ConfigMap, or the flags passed to the binary in the Kubernetes Deployment.&lt;/p&gt;</content><author><name>Alex Ellis</name></author><category term="tutorial" /><category term="ha" /><category term="architecture" /><category term="reference" /><category term="load-balancer" /><summary type="html">We look at Highly Available inlets tunnels, how to integrate with Proxy Protocol to get original source IP addresses, and how to configure a cloud load balancer.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-09-ha-tunnels/background.png" /><media:content medium="image" url="https://inlets.dev/images/2024-09-ha-tunnels/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Expose your local Kubernetes Ingress Controller via Hetzner Cloud</title><link href="https://inlets.dev/blog/2024/08/15/inlets-operator-with-hetzner.html" rel="alternate" type="text/html" title="Expose your local Kubernetes Ingress Controller via Hetzner Cloud" /><published>2024-08-15T00:00:00+00:00</published><updated>2024-08-15T00:00:00+00:00</updated><id>https://inlets.dev/blog/2024/08/15/inlets-operator-with-hetzner</id><content type="html" xml:base="https://inlets.dev/blog/2024/08/15/inlets-operator-with-hetzner.html">&lt;p&gt;There are two ways to configure inlets to expose an Ingress Controller or Istio Gateway to the public Internet, both are very similar; only the lifecycle of the tunnel differs.&lt;/p&gt;

&lt;p&gt;For teams that are new to inlets and set up most of their configuration by clicking buttons, installing Helm charts, and applying YAML from their workstation or a CI pipeline, the inlets-operator keeps things simple. Whenever you install the inlets-operator, it searches for LoadBalancer resources and provisions VMs for them with the inlets-pro TCP server preinstalled. It then creates a Deployment in the same namespace with an inlets TCP client pointing to the remote VM, and everything just works.&lt;/p&gt;

&lt;p&gt;The downside to the &lt;a href=&quot;https://github.com/inlets/inlets-operator&quot;&gt;inlets-operator&lt;/a&gt; is that if you delete the exposed resource, i.e. &lt;a href=&quot;https://github.com/kubernetes/ingress-nginx&quot;&gt;ingress-nginx&lt;/a&gt;, then the tunnel will be deleted too, and when recreated it will have a different IP address. That means you will need to update your DNS records accordingly.&lt;/p&gt;

&lt;p&gt;What if you’re heavily invested in GitOps, and regularly delete and re-create your cluster’s configuration? Then you may want a more stable IP address and set of DNS records, in that case, you can create the VM for the inlets tunnel server manually or semi-automatically with Terraform, Pulumi or our own provisioning CLI called &lt;a href=&quot;https://docs.inlets.dev/reference/inletsctl/&quot;&gt;inletsctl&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the inlets-operator, you need to pick a region and supported provider such as AWS EC2, &lt;a href=&quot;https://m.do.co/c/2962aa9e56a1&quot;&gt;DigitalOcean&lt;/a&gt;, or &lt;a href=&quot;https://www.hetzner.com/cloud/&quot;&gt;Hetzner Cloud&lt;/a&gt; and provide those options via the Helm chart. For a manual tunnel server, you can use any tooling or cloud/VPS provider you wish. We’ll be using Hetzner Cloud in this example, which is particularly good value and fast to provision.&lt;/p&gt;

&lt;h2 id=&quot;a-quick-video-demo-of-the-operator&quot;&gt;A quick video demo of the operator&lt;/h2&gt;

&lt;p&gt;In this animation by &lt;a href=&quot;https://iximiuz.com/en/posts/kubernetes-operator-pattern&quot;&gt;Ivan Velichko&lt;/a&gt;, you can see the operator in action: it detects a new Service of type LoadBalancer, provisions a VM in the cloud, then updates the Service with the IP address of the VM.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://iximiuz.com/en/posts/kubernetes-operator-pattern&quot;&gt;&lt;img src=&quot;https://iximiuz.com/kubernetes-operator-pattern/kube-operator-example-opt.gif&quot; alt=&quot;Demo GIF&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;getting-an-lb-in-about-30-seconds-with-hetzner-cloud&quot;&gt;Getting an LB in about 30 seconds with Hetzner Cloud&lt;/h2&gt;

&lt;blockquote class=&quot;twitter-tweet&quot; data-conversation=&quot;none&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;About 30s from creating a Service with type LoadBalancer, to a fully working endpoint. And I&amp;#39;m sat in a cafe running KinD with WiFi. &lt;a href=&quot;https://t.co/zUV9US7OM0&quot;&gt;pic.twitter.com/zUV9US7OM0&lt;/a&gt;&lt;/p&gt;&amp;mdash; Alex Ellis (@alexellisuk) &lt;a href=&quot;https://twitter.com/alexellisuk/status/1800863973581136340?ref_src=twsrc%5Etfw&quot;&gt;June 12, 2024&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;p&gt;I work from home, and once per week, if work allows, I try to get out to a coffee shop and to work on a blog post there, in different surroundings. On this occasion I decided to update the inlets-operator’s support for Hetzner Cloud, and to give it a quick test. You can see from the screenshot that a &lt;a href=&quot;https://kind.sigs.k8s.io/&quot;&gt;KinD&lt;/a&gt; cluster running on my MacBook Air M2 was able to get a public IP address in about 30 seconds flat.&lt;/p&gt;

&lt;p&gt;So what’s needed? First you’ll need an account with Hetzner Cloud, bear in mind Hetzner Robot is the dedicated server offering and needs a different login and account.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://accounts.hetzner.com/login&quot;&gt;Log into Hetzner Cloud&lt;/a&gt; and enter your Default project.&lt;/li&gt;
  &lt;li&gt;Click “Security” then the “API tokens” tab&lt;/li&gt;
  &lt;li&gt;Click “Generate API token” and name it “inlets-operator” and grant Read/Write access.&lt;/li&gt;
  &lt;li&gt;Click “Click to show” then copy the text and save it as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.hetzner-cloud-token&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Now determine the available regions by clicking Add Server; don’t actually add a server, but use the screen to copy the code of the region you want, e.g. Helsinki (eu-central) or Ashburn, VA (us-east).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t have an inlets license yet, obtain one at &lt;a href=&quot;https://inlets.dev/pricing&quot;&gt;inlets.dev&lt;/a&gt; and save it as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.inlets/LICENSE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Formulate the Helm install command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Create a namespace for inlets-operator&lt;/span&gt;
kubectl create namespace inlets

&lt;span class=&quot;c&quot;&gt;# Create a secret to store the Hetzner Cloud API token&lt;/span&gt;
kubectl create secret generic inlets-access-key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--from-file&lt;/span&gt; inlets-access-key&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.hetzner-cloud-token

&lt;span class=&quot;c&quot;&gt;# Create a secret to store the inlets-pro license&lt;/span&gt;
kubectl create secret generic &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  inlets-license &lt;span class=&quot;nt&quot;&gt;--from-file&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;license&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.inlets/LICENSE

&lt;span class=&quot;c&quot;&gt;# Add and update the inlets-operator helm repo&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# You only need to do this once.&lt;/span&gt;
helm repo add inlets https://inlets.github.io/inlets-operator/

&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;REGION&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;eu-central
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PROVIDER&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;hetzner

&lt;span class=&quot;c&quot;&gt;# Update the Helm repository and perform an installation&lt;/span&gt;
helm repo update &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  helm upgrade inlets-operator &lt;span class=&quot;nt&quot;&gt;--install&lt;/span&gt; inlets/inlets-operator &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; inlets &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--set&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;provider&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PROVIDER&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--set&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;region&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$REGION&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you don’t have an Ingress Controller already installed and configured in your cluster, then you can add one with arkade:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-sLS&lt;/span&gt; https://get.arkade.dev | &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;sh

arkade &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;ingress-nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That will create a service of type LoadBalancer in the default namespace. Watch it, and you’ll see a public IP appear for it:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get service &lt;span class=&quot;nt&quot;&gt;--watch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;How quickly did your public IP appear?&lt;/p&gt;

&lt;p&gt;The inlets-operator also provides a Tunnel Custom Resource Definition (CRD); you can view the tunnel objects it creates with:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get tunnels &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Access your ingress-nginx service with the IP address shown in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXTERNAL-IP&lt;/code&gt; column, and you’ll get a response from ingress-nginx’s default backend (a 404, since no Ingress records exist yet).&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; http://&amp;lt;EXTERNAL-IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you delete the tunnel CR, you’ll see it re-created with a new IP in a short period of time:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl delete tunnel ingress-nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then watch either the service or tunnel object again:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get tunnels &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; wide &lt;span class=&quot;nt&quot;&gt;--watch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;Using a single tunnel and a single license, you can expose dozens, if not hundreds, of different websites through your Ingress Controller, all running within your private or on-premises Kubernetes cluster. The inlets-operator is a quick way to get started with inlets, and a convenient way to expose your Ingress Controller to the public Internet.&lt;/p&gt;

&lt;p&gt;The inlets-operator works with &lt;a href=&quot;https://docs.inlets.dev/reference/inlets-operator&quot;&gt;different clouds&lt;/a&gt; and can expose any TCP LoadBalancer, not just Ingress Controllers and Istio.&lt;/p&gt;

&lt;p&gt;Bear in mind that the tunnel IP and DNS records will be tied to the lifecycle of your LoadBalancer services, so if you delete a LoadBalancer service, its tunnel VM will be deleted too, and if you re-create it, it will come back with a new IP address. For that reason, you may want to &lt;a href=&quot;https://docs.inlets.dev/tutorial/manual-tcp-server/&quot;&gt;create the tunnel servers manually&lt;/a&gt;, or separately from the inlets-operator.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tcp server --generate=systemd&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inlets-pro tcp client --generate=k8s_yaml&lt;/code&gt; commands are two utilities that make it easier to set up both parts of the tunnel without needing the operator.&lt;/p&gt;

&lt;p&gt;The operator also needs credentials to provision and clean up VMs; that’s another thing to consider when deciding which approach to use.&lt;/p&gt;
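&lt;p&gt;As a sketch, for most VM providers those credentials end up as a Secret in the operator’s namespace. The Secret name and key below are assumptions for illustration; check the inlets-operator documentation for the exact names your provider expects:&lt;/p&gt;

```yaml
# Hypothetical Secret holding the cloud provider's API token - the name and
# key are illustrative; consult the inlets-operator chart for your provider.
apiVersion: v1
kind: Secret
metadata:
  name: inlets-access-key
  namespace: inlets
stringData:
  inlets-access-key: "paste-your-provider-api-token-here"
```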

&lt;p&gt;The code for the inlets-operator is open source under the MIT license and available on GitHub: &lt;a href=&quot;https://github.com/inlets/inlets-operator/&quot;&gt;inlets/inlets-operator&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;watch-a-video-walk-through&quot;&gt;Watch a video walk-through&lt;/h3&gt;

&lt;p&gt;I recorded a video walk-through of the blog post, so you can watch it back and see the steps in action.&lt;/p&gt;

&lt;div style=&quot;margin: 0 auto;&quot;&gt;
    
    &lt;div class=&quot;ytcontainer&quot;&gt;
        &lt;iframe width=&quot;560&quot; height=&quot;315&quot; class=&quot;yt&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; src=&quot;https://www.youtube.com/embed/Bk98zZixJL0&quot;&gt;&lt;/iframe&gt;
    &lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;When you &lt;a href=&quot;https://inlets.dev/pricing&quot;&gt;sign up for a subscription&lt;/a&gt;, you’ll get complimentary access to a Discord community to talk with other users and the inlets team.&lt;/p&gt;</content><author><name>Alex Ellis</name></author><category term="blog" /><category term="kubernetes" /><category term="ingress" /><category term="tunnels" /><category term="private" /><category term="vpc" /><category term="hetzner" /><summary type="html">There are two ways to configure inlets to expose an Ingress Controller or Istio Gateway to the public Internet, both are very similar; only the lifecycle of the tunnel differs.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-08-15-kubernetes-ingress-hetzner/background.png" /><media:content medium="image" url="https://inlets.dev/images/2024-08-15-kubernetes-ingress-hetzner/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Access local Ollama models from a cloud Kubernetes Cluster</title><link href="https://inlets.dev/blog/2024/08/09/local-ollama-tunnel-k3s.html" rel="alternate" type="text/html" title="Access local Ollama models from a cloud Kubernetes Cluster" /><published>2024-08-09T00:00:00+00:00</published><updated>2024-08-09T00:00:00+00:00</updated><id>https://inlets.dev/blog/2024/08/09/local-ollama-tunnel-k3s</id><content type="html" xml:base="https://inlets.dev/blog/2024/08/09/local-ollama-tunnel-k3s.html">&lt;p&gt;Renting a GPU in the cloud, especially with a bare-metal host can be expensive, and even if the hourly rate looks reasonable, over the course of a year, it can really add up. Many of us have a server or workstation at home with a GPU that can be used for serving models with an open source project like &lt;a href=&quot;https://ollama.com/&quot;&gt;Ollama&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can combine a cloud-hosted, low cost, elastic Kubernetes cluster with a local single-node K3s cluster with a GPU, and in that way get the best of both worlds.&lt;/p&gt;

&lt;p&gt;One option may be to try to join the node on your home network as a worker in the cloud-hosted cluster, but I don’t think that makes sense:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why you shouldn’t use a VPN to join local hosts to your cloud Kubernetes cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Kubernetes is designed for homogeneous networking, where latency to each host is low and predictable; it is not built to work over WANs&lt;/li&gt;
  &lt;li&gt;Every time the API is accessed, or the scheduler is used, it has to reach out to your home network over the Internet, which introduces unreasonable latency&lt;/li&gt;
  &lt;li&gt;A large amount of bandwidth is required between nodes; this has to go over the Internet, which counts as billable egress traffic&lt;/li&gt;
  &lt;li&gt;The node in your home gets completely exposed to the cloud-based cluster, there is no security boundary or granularity&lt;/li&gt;
  &lt;li&gt;You’ll need complex scheduling YAML including taints, tolerations and affinity rules to ensure that the node in your home and the ones in the cloud get the right workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So whilst it might look cool to run “kubectl get nodes” and see one that’s in your home, and a bunch that are on the cloud, there are more reasons against it than for it.&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Nothing to see here. Only 65GB of bandwidth consumption from an idle k3s cluster running embedded etcd.&lt;br /&gt;&lt;br /&gt;Anyone else noticed similar? &lt;a href=&quot;https://t.co/Q4BVIIHJ7n&quot;&gt;pic.twitter.com/Q4BVIIHJ7n&lt;/a&gt;&lt;/p&gt;&amp;mdash; Alex Ellis (@alexellisuk) &lt;a href=&quot;https://twitter.com/alexellisuk/status/1583467517670195200?ref_src=twsrc%5Etfw&quot;&gt;October 21, 2022&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;p&gt;&lt;strong&gt;So how is inlets different?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inlets is well known for exposing local HTTP and TCP services on the Internet, but it can also be used for private tunnels.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;There will be minimal bandwidth used, as the model is accessed over the tunnel&lt;/li&gt;
  &lt;li&gt;The risk to your home network is minimal, as only the specific port and endpoint will be accessible remotely&lt;/li&gt;
  &lt;li&gt;It’s trivial to access the tunneled service as a HTTP endpoint with a normal ClusterIP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So rather than having to enroll machines in your local or home network to be fully part of a cloud-hosted cluster, you only tunnel what you need. It saves on bandwidth, tightens up security, and is much easier to manage.&lt;/p&gt;

&lt;p&gt;And what if Ollama isn’t suitable for your use-case? You can use the same technique by creating your own REST API with Flask, FastAPI, Express.js, Go, etc., and exposing that over the tunnel instead.&lt;/p&gt;

&lt;p&gt;There are common patterns for accessing remote APIs in different networks, such as using queues or a REST API; however, building heterogeneous clusters over high-latency links is not one of them.&lt;/p&gt;

&lt;p&gt;If you need more dynamic workloads and don’t want to build your own REST API to manage Kubernetes workloads, then consider the &lt;a href=&quot;https://openfaas.com&quot;&gt;OpenFaaS project&lt;/a&gt;, which provides a HTTP API and a built-in asynchronous queue system for running batch jobs and long-running tasks. It can also package Ollama as a function, and is easy to use over a HTTP tunnel. For example: &lt;a href=&quot;https://www.openfaas.com/blog/transcribe-audio-with-openai-whisper/&quot;&gt;How to transcribe audio with OpenAI Whisper and OpenFaaS&lt;/a&gt; or &lt;a href=&quot;https://www.openfaas.com/blog/openai-streaming-responses/&quot;&gt;Stream OpenAI responses from functions using Server Sent Events&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;a-quick-look-at-the-setup&quot;&gt;A quick look at the setup&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024-08-private-k3s-tunnel/conceptual.png&quot; alt=&quot;The setup&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;The setup with two independent Kubernetes clusters, one running locally with a GPU and ollama, the other in the cloud running your product and making HTTP requests over the tunnel.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You’ll need to do the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Set up a local machine with K3s installed, along with nvidia-containerd.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create a Kubernetes cluster using a managed Kubernetes service like DigitalOcean, AWS, or Google Cloud, or set up a self-hosted cluster on a set of VMs using a tool like &lt;a href=&quot;https://k3sup.dev&quot;&gt;K3sup&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Package and deploy Ollama along with a model as a container image, then deploy it to your local K3s cluster&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create a HTTPS tunnel server using inlets on the public Kubernetes cluster&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create a HTTPS tunnel client using inlets on the local K3s cluster&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Finally, we will launch &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; in a Pod in the public cluster and invoke the model served by Ollama over the tunnel&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then it’ll be over to you to integrate the model into your applications, or to develop your own UI or API to expose to your users.&lt;/p&gt;

&lt;h2 id=&quot;1-setup-a-local-machine-with-k3s-and-nvidia-containerd&quot;&gt;1. Setup a local machine with K3s and nvidia-containerd&lt;/h2&gt;

&lt;p&gt;Over on the &lt;a href=&quot;https://www.openfaas.com/blog/transcribe-audio-with-openai-whisper&quot;&gt;OpenFaaS blog&lt;/a&gt; under the heading “Prepare a k3s with NVIDIA container runtime support”, you’ll find full instructions for setting up a single-node K3s cluster on a machine with an Nvidia GPU.&lt;/p&gt;

&lt;p&gt;If you do not have an Nvidia GPU available, or perhaps just want to try out the tunnelling example without a K3s cluster, you can create a local cluster inside Docker using Kubernetes in Docker (KinD) or Minikube.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kind create cluster &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; ollama-local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To switch between the two clusters, use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl config use-context&lt;/code&gt; command, or the popular helper &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectx NAME&lt;/code&gt;, available via &lt;a href=&quot;https://arkade.dev&quot;&gt;arkade&lt;/a&gt; with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade get kubectx&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;2-create-a-cloud-hosted-kubernetes-cluster&quot;&gt;2. Create a cloud-hosted Kubernetes cluster&lt;/h2&gt;

&lt;p&gt;This step is mostly self-explanatory: for ease of use, set up a cloud-hosted Kubernetes cluster using a managed Kubernetes service like DigitalOcean, AWS, or Google Cloud. With a managed Kubernetes offering, load-balancers, storage, networking, and updates are managed by someone else, so it’s a good way to get started.&lt;/p&gt;

&lt;p&gt;If you already have a self-hosted cluster on a set of VMs, or want to manage Kubernetes yourself, then &lt;a href=&quot;https://k3sup.dev&quot;&gt;K3sup&lt;/a&gt; provides a quick and easy way to create a highly-available cluster.&lt;/p&gt;

&lt;h2 id=&quot;3-package-and-deploy-ollama&quot;&gt;3. Package and deploy Ollama&lt;/h2&gt;

&lt;p&gt;Ollama is a wrapper that can be used to serve a REST API for inference on various machine learning models. You can package Ollama along with a model as a container image, then deploy it to your local K3s cluster.&lt;/p&gt;

&lt;p&gt;Here’s an example Dockerfile for packaging Ollama:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-Dockerfile&quot;&gt;FROM ollama/ollama:latest

RUN apt update &amp;amp;&amp;amp; apt install -yq curl

RUN mkdir -p /app/models
# Start the server in the background, wait for it to come up, then bake the model into the image
RUN ollama serve &amp;amp; sleep 2 &amp;amp;&amp;amp; curl -i http://127.0.0.1:11434 &amp;amp;&amp;amp; ollama pull phi3

EXPOSE 11434
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Two other options for making the model available to Ollama are:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If you do not wish to package the model into a container image and push it into a remote registry, you can use an init container, or a start-up script as the entrypoint and perform that operation at runtime.&lt;/li&gt;
  &lt;li&gt;Another option is to download the model locally, copy it into a volume within the local Kubernetes cluster, and then mount that volume into the Ollama container.&lt;/li&gt;
&lt;/ol&gt;
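&lt;p&gt;As a sketch of the first option, an init container can pull the model into a shared volume before the main container starts. The container names, wait logic, and volume layout below are assumptions for illustration, not tested configuration:&lt;/p&gt;

```yaml
# Sketch only: pull the model at runtime instead of baking it into the image.
# Ollama stores models under /root/.ollama by default - both containers share it.
spec:
  initContainers:
  - name: pull-model
    image: ollama/ollama:latest
    command: ["/bin/sh", "-c"]
    args: ["ollama serve & sleep 2 && ollama pull phi3"]
    volumeMounts:
    - name: models
      mountPath: /root/.ollama
  containers:
  - name: ollama-phi3
    image: ollama/ollama:latest
    command: ["ollama", "serve"]
    volumeMounts:
    - name: models
      mountPath: /root/.ollama
  volumes:
  - name: models
    emptyDir: {}
```

&lt;p&gt;The trade-off is a slower cold-start and a runtime dependency on the model registry, in exchange for a much smaller container image.&lt;/p&gt;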

&lt;p&gt;Build the image and publish it:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OWNER&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;docker.io/alexellis2&quot;&lt;/span&gt;
docker build &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$OWNER&lt;/span&gt;/ollama-phi3:0.1.0 &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;

docker push &lt;span class=&quot;nv&quot;&gt;$OWNER&lt;/span&gt;/ollama-phi3:0.1.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now write a Kubernetes Deployment manifest and accompanying Service for Ollama:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;export OWNER=&quot;docker.io/alexellis2&quot;&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; ollama-phi3.yaml&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apps/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deployment&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-phi3&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;replicas&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;1&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;matchLabels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-phi3&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-phi3&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-phi3&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;$OWNER/ollama-phi3:0.1.0&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;11434&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;ollama&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;serve&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-phi3&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-phi3&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;protocol&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TCP&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;11434&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;targetPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;11434&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The above is designed to work with a CPU, and can be extended to support a GPU by adding the necessary device and runtime class.&lt;/p&gt;
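&lt;p&gt;As a sketch, enabling the GPU usually means adding a RuntimeClass and a GPU resource limit to the Pod template. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nvidia&lt;/code&gt; RuntimeClass name is typical for K3s with the NVIDIA container runtime, but verify what your node is actually configured with:&lt;/p&gt;

```yaml
# Fragment to merge into the Deployment's Pod template - the RuntimeClass name
# and GPU count are typical values, not tested configuration.
spec:
  runtimeClassName: nvidia
  containers:
  - name: ollama-phi3
    image: $OWNER/ollama-phi3:0.1.0
    resources:
      limits:
        nvidia.com/gpu: 1
```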

&lt;p&gt;Deploy the manifest to your local K3s cluster:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; ollama-phi3.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can check that the image has been pulled and that the Pod has started:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get deploy/ollama-phi3 &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; wide &lt;span class=&quot;nt&quot;&gt;--watch&lt;/span&gt;
kubectl logs deploy/ollama-phi3
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Check that you can invoke the model from within your local cluster.&lt;/p&gt;

&lt;p&gt;Run an Alpine Pod, install curl and jq, then try accessing the Ollama API to see if it’s up:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl run &lt;span class=&quot;nt&quot;&gt;-it&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--restart&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Never &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;alpine:latest ollama-phi3-test &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; /bin/sh

&lt;span class=&quot;c&quot;&gt;# apk add --no-cache curl jq&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# curl -i http://ollama-phi3:11434/&lt;/span&gt;

Ollama is running
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, try an inference:&lt;/p&gt;
&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# curl http://ollama-phi3:11434/api/generate -d '{&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;model&quot;&lt;/span&gt;: &lt;span class=&quot;s2&quot;&gt;&quot;phi3&quot;&lt;/span&gt;,
    &lt;span class=&quot;s2&quot;&gt;&quot;stream&quot;&lt;/span&gt;: &lt;span class=&quot;nb&quot;&gt;true&lt;/span&gt;,
    &lt;span class=&quot;s2&quot;&gt;&quot;prompt&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;What is the advantage of tunnelling a single TCP host over exposing your whole local network to an Internet-connected Kubernetes cluster?&quot;&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;' | jq
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The above configuration may be running on CPU, in which case you will need to wait a few seconds whilst the model runs the query. It took 29 seconds to get a response on my machine; with the GPU enabled, the response time will be much faster.&lt;/p&gt;

&lt;p&gt;The request above already sets &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;stream&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;true&lt;/code&gt;; to watch the data as it arrives, remove &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;| jq&lt;/code&gt; from the command. Set &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;stream&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;false&lt;/code&gt; to receive a single, complete JSON response instead.&lt;/p&gt;
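&lt;p&gt;With streaming enabled, each line of the response body is a standalone JSON object whose &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;response&lt;/code&gt; field carries the next fragment of text. Here’s a minimal sketch of reassembling those fragments with POSIX tools; the sample lines are illustrative, not real model output:&lt;/p&gt;

```shell
# Simulated streamed lines, as returned by /api/generate with "stream": true.
lines='{"model":"phi3","response":"Tunnels","done":false}
{"model":"phi3","response":" are","done":false}
{"model":"phi3","response":" neat","done":true}'

# Pull out each "response" fragment and join them into the full answer.
out=$(printf '%s\n' "$lines" | sed -n 's/.*"response":"\([^"]*\)".*/\1/p' | tr -d '\n')
echo "$out"   # -> Tunnels are neat
```

&lt;p&gt;In a real integration, you’d pipe the output of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; into the same extraction, or use a JSON-aware client in your application.&lt;/p&gt;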

&lt;h2 id=&quot;4-create-a-https-tunnel-server-using-inlets&quot;&gt;4. Create a HTTPS tunnel server using inlets&lt;/h2&gt;

&lt;p&gt;Now you can generate an access token for inlets, and then deploy the inlets server to your cloud-hosted Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;The inlets control-plane needs to be reachable from the Internet; you can achieve this via a LoadBalancer or through Kubernetes Ingress.&lt;/p&gt;

&lt;p&gt;I’ll show you how to use a LoadBalancer, because it’s a bit more concise:&lt;/p&gt;

&lt;p&gt;On the public cluster, provision a LoadBalancer service along with a ClusterIP for the data-plane.&lt;/p&gt;

&lt;p&gt;The control-plane will be used by the tunnel client in the local cluster, and the data-plane will only be available within the remote cluster.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; ollama-tunnel-server-svc.yaml&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-tunnel-server-control&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;LoadBalancer&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8123&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;targetPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8123&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-tunnel-server&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-tunnel-server-data&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ClusterIP&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8000&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;targetPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8000&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ollama-tunnel-server&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Apply the manifest on the remote cluster: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl apply -f ollama-tunnel-server-svc.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, obtain the public IP address of the LoadBalancer by monitoring the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXTERNAL-IP&lt;/code&gt; field with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get svc -w -o wide ollama-tunnel-server-control&lt;/code&gt; command.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get svc &lt;span class=&quot;nt&quot;&gt;-w&lt;/span&gt;
NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;S&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;          AGE
kubernetes                     ClusterIP      10.245.0.1      &amp;lt;none&amp;gt;           443/TCP          12m
ollama-tunnel-server-control   LoadBalancer   10.245.26.4     &amp;lt;pending&amp;gt;        8123:32458/TCP   8s
ollama-tunnel-server-control   LoadBalancer   10.245.26.4     157.245.29.186   8123:32458/TCP   2m30s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now plug the value into the following command, run from your workstation:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;EXTERNAL_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;157.245.29.186&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;TOKEN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;openssl rand &lt;span class=&quot;nt&quot;&gt;-base64&lt;/span&gt; 32&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$TOKEN&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; inlets-token.txt

inlets-pro http server &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--generate&lt;/span&gt; k8s_yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--generate-name&lt;/span&gt; ollama-tunnel-server &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--generate-version&lt;/span&gt; 0.9.33 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--auto-tls&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--auto-tls-san&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$EXTERNAL_IP&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--token&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$TOKEN&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ollama-tunnel-server-deploy.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Apply the generated manifest to the remote cluster:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; ollama-tunnel-server-deploy.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can check that the tunnel server has started up properly with:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl logs deploy/ollama-tunnel-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;At this stage, you should have the following on your public cluster:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ kubectl get service,deploy
NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service/kubernetes                     ClusterIP      10.245.0.1      &amp;lt;none&amp;gt;           443/TCP          110m
service/ollama-tunnel-server-control   LoadBalancer   10.245.7.92     157.245.29.186   8123:31581/TCP   76m
service/ollama-tunnel-server-data      ClusterIP      10.245.40.138   &amp;lt;none&amp;gt;           8000/TCP         76m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ollama-tunnel-server   1/1     1            1           88m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;5-create-a-https-tunnel-client-using-inlets&quot;&gt;5. Create a HTTPS tunnel client using inlets&lt;/h2&gt;

&lt;p&gt;Now you can create a tunnel client on your local K3s cluster:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;EXTERNAL_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;

inlets-pro http client &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--generate&lt;/span&gt; k8s_yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--generate-name&lt;/span&gt; ollama-tunnel-client &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--url&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;wss://&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$EXTERNAL_IP&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;:8123&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--token-file&lt;/span&gt; inlets-token.txt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--upstream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;http://ollama-phi3:11434 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ollama-tunnel-client-deploy.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The upstream uses the name of the Ollama service within the local cluster (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ollama-phi3&lt;/code&gt;), where the port is 11434.&lt;/p&gt;

&lt;p&gt;The above will generate three Kubernetes objects:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A Deployment for the inlets client&lt;/li&gt;
  &lt;li&gt;A Secret for the control-plane token&lt;/li&gt;
  &lt;li&gt;A Secret for your license key for inlets&lt;/li&gt;
&lt;/ol&gt;
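
&lt;p&gt;As a rough illustration of what the generator emits, the client Deployment looks something like the following. This is a simplified sketch only: the exact labels, image tag and secret mount paths in the generated YAML may differ.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-tunnel-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama-tunnel-client
  template:
    metadata:
      labels:
        app: ollama-tunnel-client
    spec:
      containers:
      - name: inlets-client
        image: ghcr.io/inlets/inlets-pro:0.9.33
        command: [&quot;inlets-pro&quot;]
        args:
        - &quot;http&quot;
        - &quot;client&quot;
        - &quot;--url=wss://157.245.29.186:8123&quot;
        - &quot;--token-file=/var/secrets/inlets/token&quot;
        - &quot;--upstream=http://ollama-phi3:11434&quot;
        - &quot;--license-file=/var/secrets/inlets-license/license&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The two Secrets are mounted into the container as files, which is why the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--token-file&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--license-file&lt;/code&gt; flags point at paths rather than literal values.&lt;/p&gt;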

&lt;p&gt;Apply the generated manifest to the local cluster:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; ollama-tunnel-client-deploy.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Check that the tunnel was able to connect:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl logs deploy/ollama-tunnel-client

inlets-pro HTTP client. Version: 0.9.32
Copyright OpenFaaS Ltd 2024.
2024/08/09 11:49:02 Licensed to: alex &amp;lt;alex@openfaas.com&amp;gt;, expires: 47 day&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;s&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
Upstream:  &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; http://ollama-phi3:11434
&lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2024/08/09 11:49:02&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;info &lt;span class=&quot;nv&quot;&gt;msg&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Connecting to proxy&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;wss://157.245.29.186:8123/connect&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2024/08/09 11:49:03&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;info &lt;span class=&quot;nv&quot;&gt;msg&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Connection established&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;client_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;91612bca9c0f41c4a313424db9b6a0c7
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There is no way to access the tunneled Ollama service directly from the Internet. This is by design: only the control-plane is made available to the client on port 8123.&lt;/p&gt;

&lt;p&gt;Your local Kubernetes cluster should have the following:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get svc,deploy
NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;S&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;     AGE
service/kubernetes    ClusterIP   10.96.0.1     &amp;lt;none&amp;gt;        443/TCP     110m
service/ollama-phi3   ClusterIP   10.96.128.9   &amp;lt;none&amp;gt;        11434/TCP   106m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ollama-phi3            1/1     1            1           106m
deployment.apps/ollama-tunnel-client   1/1     1            1           83m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;6-invoke-the-model-from-the-public-cluster&quot;&gt;6. Invoke the model from the public cluster&lt;/h2&gt;

&lt;p&gt;Switch back to the Kubernetes cluster on the public cloud.&lt;/p&gt;

&lt;p&gt;Now you can run a Pod in the public cluster and invoke the model served by Ollama over the tunnel:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl run &lt;span class=&quot;nt&quot;&gt;-it&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--restart&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Never &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;alpine:latest ollama-phi3-test &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; /bin/sh

&lt;span class=&quot;c&quot;&gt;# apk add --no-cache curl&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# curl -i http://ollama-tunnel-server-data:8000/api/generate -d '{&lt;/span&gt;
              &lt;span class=&quot;s2&quot;&gt;&quot;model&quot;&lt;/span&gt;: &lt;span class=&quot;s2&quot;&gt;&quot;phi3&quot;&lt;/span&gt;,
              &lt;span class=&quot;s2&quot;&gt;&quot;stream&quot;&lt;/span&gt;: &lt;span class=&quot;nb&quot;&gt;true&lt;/span&gt;,
              &lt;span class=&quot;s2&quot;&gt;&quot;prompt&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;How can you combine two networks?&quot;&lt;/span&gt;
            &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Example response:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;HTTP/1.1 200 OK
Content-Type: application/x-ndjson
Date: Fri, 09 Aug 2024 11:56:19 GMT
Transfer-Encoding: chunked

{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.486043693Z&quot;,&quot;response&quot;:&quot;Com&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.543130824Z&quot;,&quot;response&quot;:&quot;bin&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.598975314Z&quot;,&quot;response&quot;:&quot;ing&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.654731638Z&quot;,&quot;response&quot;:&quot; two&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.710487681Z&quot;,&quot;response&quot;:&quot; or&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.766891184Z&quot;,&quot;response&quot;:&quot; more&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.822626098Z&quot;,&quot;response&quot;:&quot; neural&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.880506452Z&quot;,&quot;response&quot;:&quot; networks&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.937516988Z&quot;,&quot;response&quot;:&quot; is&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:19.993925621Z&quot;,&quot;response&quot;:&quot; a&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:20.049548473Z&quot;,&quot;response&quot;:&quot; technique&quot;,&quot;done&quot;:false}
{&quot;model&quot;:&quot;phi3&quot;,&quot;created_at&quot;:&quot;2024-08-09T11:56:20.106052896Z&quot;,&quot;response&quot;:&quot; commonly&quot;,&quot;done&quot;:false}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note that in this case, the service name for the tunnel is used, along with the default HTTP port for an inlets tunnel (8000), instead of port 11434 that Ollama listens on in the local cluster.&lt;/p&gt;

&lt;p&gt;Anything you deploy within your public Kubernetes cluster can access the model served by Ollama, by making HTTP requests to the data plane of the tunnel server with the address: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://ollama-tunnel-server-data:8000&lt;/code&gt;.&lt;/p&gt;
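
&lt;p&gt;Each line of the streamed response is a standalone JSON object carrying a partial &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;response&lt;/code&gt; field. As a minimal sketch of how a consumer might reassemble the stream (illustrative Python, not part of inlets or Ollama), the chunks can be concatenated until a line reports &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;done&lt;/code&gt;:&lt;/p&gt;

```python
import json

def reassemble(lines):
    # Concatenate the partial "response" fields from the streamed
    # NDJSON output, stopping once a chunk reports done=true.
    parts = []
    for line in lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)

# Sample chunks, in the shape of the response shown above
chunks = [
    '{"model":"phi3","response":"Com","done":false}',
    '{"model":"phi3","response":"bin","done":false}',
    '{"model":"phi3","response":"ing","done":true}',
]
print(reassemble(chunks))  # prints: Combining
```

&lt;p&gt;In practice, the lines would be read from the chunked HTTP response of the data-plane address, rather than from a list.&lt;/p&gt;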

&lt;p&gt;See also: &lt;a href=&quot;https://github.com/ollama/ollama/blob/main/docs/api.md&quot;&gt;Ollama REST API&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In this post, you learned how to use a private tunnel to make a local GPU-enabled HTTP service like Ollama available in a remote Kubernetes cluster. This can be useful for serving models from a local GPU, or for exposing a service that is not yet ready for the public Internet.&lt;/p&gt;

&lt;p&gt;You can now integrate the model into your applications, or develop your own UI or API to expose to your users using the ClusterIP of the data-plane service.&lt;/p&gt;

&lt;p&gt;We exposed the control-plane for the tunnel server over a cloud Load Balancer. However, if you have multiple tunnels, you can use a Kubernetes Ingress Controller instead, and direct traffic to the correct tunnel based on the hostname and an Ingress record. If you take this route, remove the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--auto-tls&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--auto-tls-san&lt;/code&gt; flags from the inlets-pro command, as they will no longer be needed. You can use cert-manager to terminate TLS instead.&lt;/p&gt;

&lt;p&gt;If you’d like to see a live demo, watch the video below:&lt;/p&gt;

&lt;div style=&quot;margin: 0 auto;&quot;&gt;
    
    &lt;div class=&quot;ytcontainer&quot;&gt;
        &lt;iframe width=&quot;560&quot; height=&quot;315&quot; class=&quot;yt&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; src=&quot;https://www.youtube.com/embed/F_2IIxrGurI&quot;&gt;&lt;/iframe&gt;
    &lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;If you enjoyed this post, you can find similar examples in the &lt;a href=&quot;https://docs.inlets.dev/&quot;&gt;inlets docs&lt;/a&gt;, or on the &lt;a href=&quot;https://inlets.dev/blog/&quot;&gt;inlets blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You may also like:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://inlets.dev/blog/2023/02/24/ingress-for-local-kubernetes-clusters.html&quot;&gt;How to Get Ingress for Private Kubernetes Clusters&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://inlets.dev/blog/2022/07/07/access-kubernetes-api-server.html&quot;&gt;Access your local cluster like a managed Kubernetes engine&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Alex Ellis</name></author><category term="blog" /><category term="kubernetes" /><category term="ingress" /><category term="ai" /><category term="ml" /><category term="ollama" /><summary type="html">Renting a GPU in the cloud, especially with a bare-metal host can be expensive, and even if the hourly rate looks reasonable, over the course of a year, it can really add up. Many of us have a server or workstation at home with a GPU that can be used for serving models with an open source project like Ollama.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-08-private-k3s-tunnel/background.png" /><media:content medium="image" url="https://inlets.dev/images/2024-08-private-k3s-tunnel/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Get Kubernetes Ingress like Magic</title><link href="https://inlets.dev/blog/2024/06/18/magic-kubernetes-ingress.html" rel="alternate" type="text/html" title="Get Kubernetes Ingress like Magic" /><published>2024-06-18T00:00:00+00:00</published><updated>2024-06-18T00:00:00+00:00</updated><id>https://inlets.dev/blog/2024/06/18/magic-kubernetes-ingress</id><content type="html" xml:base="https://inlets.dev/blog/2024/06/18/magic-kubernetes-ingress.html">&lt;p&gt;Learn how to expose Ingress from your Kubernetes cluster like magic without having to set up any additional infrastructure.&lt;/p&gt;

&lt;p&gt;This post will start with a bit of background information on inlets, why we think now is the right time to offer hosted tunnels, how it works, and then we’ll give you a walk-through so you can see if it’s for you.&lt;/p&gt;

&lt;h2 id=&quot;introduction-to-inlets&quot;&gt;Introduction to inlets&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why did we need inlets in 2019?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started &lt;a href=&quot;https://inlets.dev/&quot;&gt;Inlets&lt;/a&gt; in early 2019 as an antidote to the frustrating restrictions of the SaaS-style tunnels of the day like Ngrok and Cloudflare tunnels, and manual port-forwarding that exposed your home address to users.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Rather than offering poor integration with containers, inlets was born at the height of Cloud Native, with a Kubernetes operator, Helm chart, container image, and multi-arch binaries for macOS, Windows and Linux.&lt;/li&gt;
  &lt;li&gt;Rather than having stringent and impractical rate-limits, inlets was designed to be self-hosted meaning you were free of limits.&lt;/li&gt;
  &lt;li&gt;Rather than exposing where you lived via your ISP’s IP address, you got to mask it with a public IP address from a cloud provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What were the trade-offs?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You would need to set up a VM somewhere and start the tunnel server on it. To make that as easy as possible, two open-source utilities were created, with support for various cloud platforms:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/inlets/inletsctl&quot;&gt;inletsctl&lt;/a&gt; creates a cloud VM with a public IP address and the inlets server pre-installed, then prints out the command line for the inlets client&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/inlets/inlets-operator&quot;&gt;inlets-operator&lt;/a&gt; runs inside a Kubernetes cluster, and creates a cloud VM with a tunnel server whenever it detects a Service of type LoadBalancer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why are you making a SaaS now?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Making a SaaS for inlets always seemed counter-intuitive: why not use one of the established products? But increasingly, we saw users drawn to the user experience, quality of integration, and versatility of inlets. Our team even started to find creative ways to make inlets feel like a SaaS, by using a single VM for multiple different websites, or by setting up inlets tunnel servers on a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;So we’ve built an extension for inlets that makes it into a SaaS, and have already started using it ourselves. Now we’re looking for early users to try it out and provide feedback.&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot; data-conversation=&quot;none&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Just gave this a try to get ingress for my OpenFaaS development gateway running in a private k3d cluster. &lt;a href=&quot;https://t.co/05WMFwiPvP&quot;&gt;pic.twitter.com/05WMFwiPvP&lt;/a&gt;&lt;/p&gt;&amp;mdash; Han Verstraete (@welteki) &lt;a href=&quot;https://twitter.com/welteki/status/1803002720602751210?ref_src=twsrc%5Etfw&quot;&gt;June 18, 2024&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;h2 id=&quot;how-hosted-tunnels-the-saas-works&quot;&gt;How hosted tunnels (the SaaS) works&lt;/h2&gt;

&lt;p&gt;The inlets client runs inside your Kubernetes cluster as a Deployment, just like self-hosted inlets; the magic is in how the tunnel server is managed for you.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024-06-saas-inlets/conceptual.jpeg&quot; alt=&quot;Conceptual architecture for the SaaS&quot; /&gt;&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Conceptual architecture for the SaaS&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For the initial version of the SaaS, we’re offering what we’re calling an &lt;em&gt;Ingress Tunnel&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;An Ingress Tunnel:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Exposes an Ingress Controller or Istio Gateway to the public internet&lt;/li&gt;
  &lt;li&gt;Uses ports 80 and 443&lt;/li&gt;
  &lt;li&gt;Supports ACME HTTP01 and DNS01 challenges for Let’s Encrypt&lt;/li&gt;
  &lt;li&gt;Supports WebSockets, HTTP/2 and gRPC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each tenant’s Ingress Tunnel gets its own dedicated deployment of an inlets tunnel server, which at idle takes up about 3MB of RAM. The tunnel server is a single binary written in Go, and is designed to be very efficient.&lt;/p&gt;

&lt;p&gt;You’ll need the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A domain under your control&lt;/li&gt;
  &lt;li&gt;The ability to create a DNS CNAME entry to our SaaS cluster’s public IP address&lt;/li&gt;
  &lt;li&gt;A Kubernetes cluster with an Ingress Controller or Istio Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use any Ingress Controller, such as ingress-nginx, Traefik, Kong, or an Istio Gateway.&lt;/p&gt;

&lt;p&gt;Once you tell us the domain names you want to expose via your &lt;em&gt;Ingress Tunnel&lt;/em&gt;, we’ll provide you with YAML for the tunnel client, which runs inside your cluster.&lt;/p&gt;

&lt;p&gt;From there, you can go ahead and use Ingress as if your cluster was being provided by AWS or a similar managed service.&lt;/p&gt;

&lt;h2 id=&quot;a-quick-walk-through&quot;&gt;A quick walk-through&lt;/h2&gt;

&lt;p&gt;Here’s a quick walk-through so you can try out the service. We’ll use OpenFaaS Community Edition (CE), along with ingress-nginx and Let’s Encrypt. &lt;a href=&quot;https://arkade.dev&quot;&gt;arkade.dev&lt;/a&gt; will be used to keep the commands simple, but you can use Helm or kubectl if you like to make work for yourself.&lt;/p&gt;

&lt;p&gt;We’d suggest going end to end with these instructions before switching over to one of your own services like a Grafana dashboard, or blog, etc.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Create a Kubernetes cluster, some options:
    &lt;ul&gt;
      &lt;li&gt;Start one inside Docker with kind or k3d on your machine&lt;/li&gt;
      &lt;li&gt;Setup a VM in your homelab and install k3s with &lt;a href=&quot;https://k3sup.dev&quot;&gt;k3sup&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Flash an SD card with Ubuntu 22.04 or Raspberry Pi OS lite and install k3s with &lt;a href=&quot;https://k3sup.dev&quot;&gt;k3sup&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Install the &lt;a href=&quot;https://github.com/kubernetes/ingress-nginx&quot;&gt;ingress-nginx&lt;/a&gt; via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade install ingress-nginx&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Install cert-manager to obtain certificates via Let’s Encrypt with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade install cert-manager&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, provide us with the list of domains you want to expose i.e. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openfaas.example.com&lt;/code&gt; (replace “example.com” with your own domain).&lt;/p&gt;

&lt;p&gt;For each domain name, create a DNS CNAME record to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;saas.inlets.dev&lt;/code&gt;.&lt;/p&gt;
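
&lt;p&gt;In a BIND-style zone file, the record would look like this, with the record name and domain replaced by your own:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;; in the zone for example.com
openfaas    IN    CNAME    saas.inlets.dev.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;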

&lt;p&gt;Check that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nslookup openfaas.example.com&lt;/code&gt; resolves to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;saas.inlets.dev&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We’ll then provide you with YAML for the inlets tunnel client, which creates a Deployment. Apply it, then check its logs to see that it has connected properly:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get deploy/alexellis-inlets-client
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
alexellis-inlets-client   1/1     1            1           21h

&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl logs deploy/alexellis-inlets-client
               ___       __  
  __  ______  / &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;_&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;___  / /__
 / / / / __ &lt;span class=&quot;se&quot;&gt;\/&lt;/span&gt; / / __ &lt;span class=&quot;se&quot;&gt;\/&lt;/span&gt; //_/
/ /_/ / /_/ / / / / / / ,&amp;lt;   
&lt;span class=&quot;se&quot;&gt;\_&lt;/span&gt;_,_/ .___/_/_/_/ /_/_/|_|  
    /_/                      

inlets &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;tm&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; uplink client: 0.9.21 - b0c7ed2beeb6f244ecac149e3b72eaeb3fb00d23
All rights reserved OpenFaaS Ltd &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;2023&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2024/06/18 10:18:55&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;info &lt;span class=&quot;nv&quot;&gt;msg&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Connecting to proxy&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2024/06/18 10:18:56&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;info &lt;span class=&quot;nv&quot;&gt;msg&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Connection established&quot;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;client_id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;4458bf47cf7a4022834ad42f67307e0d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To install OpenFaaS CE, you can use the Helm chart with TLS and Ingress enabled &lt;a href=&quot;https://docs.openfaas.com/reference/tls-openfaas/&quot;&gt;by following these instructions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You should see your ingress entry, along with the domain you provided us with:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get ingress &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; openfaas
NAME               CLASS   HOSTS                 ADDRESS   PORTS     AGE
openfaas-ingress   nginx   openfaas.example.com            80, 443   19h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then, watch for the certificates to be obtained by cert-manager:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get certificates &lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--watch&lt;/span&gt;

NAMESPACE   NAME                    READY   SECRET                  AGE
openfaas    openfaas-gateway-cert   True    openfaas-gateway-cert   19h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can then access OpenFaaS via its UI or CLI, use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;arkade info openfaas&lt;/code&gt; for more instructions.&lt;/p&gt;

&lt;p&gt;Use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://openfaas.example.com&lt;/code&gt; for the address for the OpenFaaS gateway, replacing the domain with your own.&lt;/p&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;In a matter of seconds, you can start routing traffic to your Ingress Controller or Istio Gateway without having to set up any additional infrastructure, firewall rules, NAT, or cloud VMs. Just provide us with the DNS names for each website you want to host, create a DNS CNAME record pointing at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;saas.inlets.dev&lt;/code&gt;, and our SaaS will take care of the rest.&lt;/p&gt;

&lt;p&gt;OpenFaaS CE was chosen for a test application because its chart has built-in options for Ingress and TLS, and it’s relatively quick and easy to install. Of course you will have your own applications that you want to expose, and whilst we used one particular option for Ingress, there are many others and they’ll all work.&lt;/p&gt;

&lt;p&gt;If you’d like to try out an &lt;em&gt;Ingress Tunnel&lt;/em&gt; that’s hosted by the inlets team, then please get in &lt;a href=&quot;https://inlets.dev/contact&quot;&gt;touch with us via the website&lt;/a&gt; or &lt;a href=&quot;https://x.com/alexellisuk/status/1802970100791665104&quot;&gt;reach out to me on X&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;qa&quot;&gt;Q&amp;amp;A&lt;/h3&gt;

&lt;p&gt;Q. What does it cost for each &lt;em&gt;Ingress Tunnel&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;A. The hosted &lt;em&gt;Ingress Tunnel&lt;/em&gt; will be free during our testing period. You can set up your own self-managed tunnel server at any time, either manually, with &lt;a href=&quot;https://github.com/inlets/inletsctl/&quot;&gt;inletsctl&lt;/a&gt; or the &lt;a href=&quot;https://github.com/inlets/inlets-operator&quot;&gt;inlets-operator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Q. Who gets priority access to Ingress Tunnels?&lt;/p&gt;

&lt;p&gt;A. We’ll try to keep up with demand, but anyone who is an &lt;a href=&quot;https://inlets.dev/pricing&quot;&gt;existing inlets subscriber&lt;/a&gt; or who &lt;a href=&quot;https://github.com/sponsors/alexellis&quot;&gt;sponsors me via GitHub&lt;/a&gt; will get priority access.&lt;/p&gt;

&lt;p&gt;Q. What if I want to expose more than one domain?&lt;/p&gt;

&lt;p&gt;A. Just tell us the names of each, and we’ll configure the Ingress Tunnel for you.&lt;/p&gt;

&lt;p&gt;Q. What if I want to expose more than one cluster?&lt;/p&gt;

&lt;p&gt;A. That’s not a problem, each cluster will get its own tunnel client connection information.&lt;/p&gt;

&lt;p&gt;Q. Can I expose a TCP port for a database or another service?&lt;/p&gt;

&lt;p&gt;A. Services like MongoDB, PostgreSQL and NATS can be exposed via a self-managed inlets TCP tunnel server.&lt;/p&gt;

&lt;p&gt;Q. How does this compare to Wireguard?&lt;/p&gt;

&lt;p&gt;A. Wireguard is a VPN for connecting hosts privately, not for exposing services to the public Internet. Some inlets users use both - for different things.&lt;/p&gt;

&lt;p&gt;Q. Can I expose my Raspberry Pi cluster?&lt;/p&gt;

&lt;p&gt;A. Yes.&lt;/p&gt;

&lt;p&gt;Q. If I run a cluster with KinD or K3d on my laptop, what happens when I go to a cafe?&lt;/p&gt;

&lt;p&gt;A. When you shut down Docker, the tunnel will disconnect. It will reconnect when you restart Docker Desktop, whichever network you happen to be using your laptop on.&lt;/p&gt;

&lt;p&gt;Q. If I’m already hosting a tunnel server, should I switch to an &lt;em&gt;Ingress Tunnel&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;A. If you’re content with your current setup, feel free to carry on as you are. But you’re welcome to test out the SaaS and see if it’s a better fit for you.&lt;/p&gt;</content><author><name>Alex Ellis</name></author><category term="blog" /><category term="kubernetes" /><category term="ingress" /><summary type="html">Learn how to expose Ingress from your Kubernetes cluster like magic without having to set up any additional infrastructure.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://inlets.dev/images/2024-06-saas-inlets/background.png" /><media:content medium="image" url="https://inlets.dev/images/2024-06-saas-inlets/background.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>