Intro

A few months back I learned Go, mainly to force myself to learn something new, and also because I had always been a bit fascinated with how easy Go makes concurrency. Goroutines and channels are an eye-opening way of handling concurrency, much easier to do well than in other languages.

I have been really busy with work lately, so I haven’t had as much time to spend on side projects. I knew I didn’t want to go too long without doing something in Go though, as it’s still fresh in my mind and new skills quickly atrophy if you don’t use them. So as an easy project that would let me explore a bit more of the package landscape, I decided to write a simple URL shortener. URL shortening is a relatively simple task, but my plan was to touch on SQL, template generation, and web frameworks. The goal is to get more exposure to the Go ecosystem, so that when I have more time to dive into my next big project I’m not spending as much of it learning.

Background

So what is a URL shortener? The idea is pretty simple: full URLs, with all their subdirectories, path parts, and query parameters, can get quite long, which makes them hard to share, especially on social platforms. The solution is to serve a short base domain plus a short code that represents each longer URL. When a user goes to the short URL they are redirected to the long URL. In the backend, all the shortening service does is look up the code in its database to find the long-form URL and then issue an HTTP redirect. Common URL shorteners you have probably heard of are bitly and tinyurl. A good URL shortener depends on having a nice short domain to serve from; since this was a toy project I didn’t bother getting one (or hosting this externally).

So how is the short code generated? There are many clever ways to do this, but I decided to just go with a simple brute-force approach: decide how many characters the short code should be and which alphabet to use, then randomly select characters from that alphabet until a unique short code is generated. For my alphabet I went with abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789, which is 62 characters long. This gives 62^n possible short codes. Since this was a toy project I just went with n=4, which results in 14,776,336 possible short codes. I am sure a site like bitly needs more, but for playing around this was good enough.

For generation of this code I opted for a simple byte implementation:

// the 62-character alphabet described above
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

func RandString(n int) string {
    b := make([]byte, n)
    for i := range b {
        // pick a random character from the alphabet for each position
        b[i] = letterBytes[rand.Intn(len(letterBytes))]
    }
    return string(b)
}

I grabbed this from here; I was mainly checking whether the standard Go library had a random-string function and found this instead. Some of the options considered in that post are much faster, but this is closest to what I would have implemented from scratch anyway.

SQL packages

Go has a handful of good SQL packages, depending on what you are looking for. This JetBrains article compares database/sql (part of the standard library), sqlx, sqlc, and GORM. Their benchmark shows that as the number of records grows, database/sql and sqlc scale the best. If you prefer a full ORM then GORM is likely the pick for you, whereas sqlx provides a handful of niceties on top of database/sql.

I enjoy the expressive power of SQL, so I don’t mind being a bit closer to the actual database, which is why sqlc appeals to me. The goal of sqlc is to compile SQL into type-safe Go code. You install the sqlc CLI, write your SQL, then compile it into Go. The main limitation of sqlc is that it only supports MySQL, PostgreSQL, and SQLite. I opted for SQLite because of its minimal dependencies and tiny database files. For this project I defined the following schema:

CREATE TABLE url_mapping (
 id   INTEGER PRIMARY KEY,
 longurl text NOT NULL,
 shortcode text NOT NULL,
 owner text NOT NULL
);
CREATE UNIQUE INDEX shortcode_idx ON url_mapping (shortcode);
CREATE UNIQUE INDEX longurl_idx ON url_mapping (longurl);

The shortcode and longurl columns both have unique indexes on them. Technically longurl doesn’t need to be unique, but I figured: why waste multiple shortcodes on the same URL? If you were using a different database, a small optimization would be to specify the length of the shortcode column so that storage could be allocated appropriately.

With this in place you then specify the queries you would like functions generated for. For this case I needed one to look up url_mappings by shortcode, one by longurl, and a query to insert new url_mappings. Update and delete queries might be needed if the functionality were extended, but they aren’t needed here.

-- name: GetUrlMappingByShortcode :one
SELECT * FROM url_mapping
WHERE shortcode = ? LIMIT 1;

-- name: GetUrlMappingByLongurl :one
SELECT * FROM url_mapping
WHERE longurl = ? LIMIT 1;

-- name: CreateUrlMapping :one
INSERT INTO url_mapping (
 longurl, shortcode, owner
) VALUES (
 ?, ?, ?
)
RETURNING *;

This is mostly just regular SQL. The thing to note is the annotation above each query: it specifies the name of the Go function that will be generated and the number of results expected. All of these return one result, but you could change :one to :many. If your query just alters the database and returns nothing, then the RETURNING clause can be left out and :one becomes :exec.

Additionally, add a simple config (sqlc.yaml):

version: "2"
sql:
  - engine: "sqlite"
    schema: "schema.sql"
    queries: "query.sql"
    gen:
      go:
        package: "repo"
        out: "repo"

Once you have defined your queries (query.sql), schema (schema.sql), and config (sqlc.yaml), just run sqlc generate from the directory containing those files. This will generate three files: db.go (basic db interaction definitions), models.go (structs for each of your tables), and query.sql.go (functions for all of your queries).
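To give a feel for the output, here is a rough sketch of what the generated repo package ends up looking like for this schema. This is not the verbatim output; exact field types and signatures depend on the engine and config:

package repo

// models.go: sqlc emits one struct per table
// (SQLite's INTEGER PRIMARY KEY typically maps to int64)
type UrlMapping struct {
    ID        int64
    Longurl   string
    Shortcode string
    Owner     string
}

// query.sql.go: the parameter struct for the CreateUrlMapping query
type CreateUrlMappingParams struct {
    Longurl   string
    Shortcode string
    Owner     string
}

// query.sql.go also attaches one method per annotated query to the Queries type,
// roughly:
//
//   GetUrlMappingByShortcode(ctx context.Context, shortcode string) (UrlMapping, error)
//   GetUrlMappingByLongurl(ctx context.Context, longurl string) (UrlMapping, error)
//   CreateUrlMapping(ctx context.Context, arg CreateUrlMappingParams) (UrlMapping, error)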

From Go, in order to apply the DDL you would do something like this:

// using Go's embed package to read in all of schema.sql (needs a blank import of "embed")
//go:embed schema.sql
var ddl string

func setupDb() (*sql.DB, error) {
    ctx := context.Background()
    // "sqlite3" is the name registered by whichever SQLite driver you blank-import,
    // e.g. _ "github.com/mattn/go-sqlite3"
    db, err := sql.Open("sqlite3", "file:urls.db")

    if err != nil {
        return db, err
    }

    // create the tables; the result and error are ignored because an error is
    // expected if the schema has already been applied
    _, _ = db.ExecContext(ctx, ddl)

    return db, nil
}

Actually querying the db is also straightforward:

queries = repo.New(db)
mapping, err := queries.GetUrlMappingByLongurl(ctx.Request().Context(), req.Longurl)
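Putting the pieces together, a minimal wiring might look something like the sketch below. I'm assuming queries is a package-level variable (which the plain assignment above and the handlers later imply) and that the generated code is imported as the repo package:

// package-level handle used by the HTTP handlers
var queries *repo.Queries

func main() {
    db, err := setupDb()
    if err != nil {
        log.Fatal(err)
    }
    // repo.New wraps the *sql.DB with the sqlc-generated query methods
    queries = repo.New(db)
    // ... the echo routes shown in the next section get registered here
}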

Web framework

Go has a handful of web frameworks; echo, gin, and chi are examples. Part of the community chooses to roll their own, because the standard library makes it really easy to do so. I opted for echo because it sells itself as a “High performance, extensible, minimalist Go web framework”. I didn’t test the high-performance part for this project, but the extensible and minimalist claims seem to hold up well.

After installing the package, setting up and running echo is easy:

e := echo.New()
e.Static("/static", "assets") // serves the assets folder at /static; these are all static resources
// Sets up GET endpoints with and without parameters, each one will execute and call the specified function
e.GET("/", renderHome) 
e.GET("/reset", renderReset)
e.GET("/:short", redirectShort)
// Sets up a POST endpoint
e.POST("/register", registerUrl)
// Starts the webserver
e.Logger.Fatal(e.Start(":1323"))

Each function called by an endpoint has to conform to the func name(ctx echo.Context) error signature. Other resources needed within the function should be defined elsewhere in your module. For the register endpoint, here is the implementation:

func registerUrl(ctx echo.Context) error {
	log.Info("Registering url")
	req := new(ShortenRequest)
	if err := ctx.Bind(req); err != nil {
		return err
	}
	...
	for {

		short := RandString(4)
		_, err := queries.GetUrlMappingByShortcode(ctx.Request().Context(), short)
		if errors.Is(err, sql.ErrNoRows) {
			mapping := repo.CreateUrlMappingParams{Longurl: req.Longurl, Owner: req.Email, Shortcode: short}
			created, err := queries.CreateUrlMapping(ctx.Request().Context(), mapping)
			...
			return shortenResp(ctx, created, success, status)
		}
	}
}

Some of the details I left out for brevity. The ctx.Bind method from echo binds the incoming request to a struct that you define with standard struct tags. The cool thing here is that you can use json, xml, form, and query params all on the same struct field, e.g.

Name  string `json:"name" xml:"name" form:"name" query:"name"`
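For this project the bind target would look something like the following; the Longurl and Email fields match what registerUrl uses above, while the exact tag values are my assumption:

// hypothetical shape of the ShortenRequest struct bound in registerUrl
type ShortenRequest struct {
    Longurl string `json:"longurl" xml:"longurl" form:"longurl" query:"longurl"`
    Email   string `json:"email" xml:"email" form:"email" query:"email"`
}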

Once a URL has been registered, the short version can be accessed at SERVER_URL/code, where the code was provided in the response. Handling the redirect with echo is simple:

func redirectShort(ctx echo.Context) error {
	short := ctx.Param("short")

	mapping, err := queries.GetUrlMappingByShortcode(ctx.Request().Context(), short)
	if err != nil {
		return ctx.String(http.StatusNotFound, "No url registered for that shortcode")
	}
	return ctx.Redirect(http.StatusFound, mapping.Longurl)
}

Conclusion

A URL shortener is a good project for exploring the particulars of different libraries without worrying about implementation complexity. Overall, I greatly enjoyed working with sqlc. The SQL-first approach works well for me; if you instead prefer to start with objects/structs, then GORM is likely a better choice. Echo is really straightforward to use and extend. I didn’t test the speed, but for this application it was plenty fast. I will likely explore some of the other web frameworks to get a better feel for what I prefer in Go.

For this project I also explored using templ for template generation, htmx for an interactive web frontend (without a lot of JS), and tailwindcss as the CSS framework. This post is already long enough though, so I will save those for another post. This was a fun project to implement and didn’t take too much time. I previously understood the idea behind a URL shortener, but I find true understanding comes through hands-on development. The source code for this project can be found here.