Build you a `:telemetry` for such learn!
One of the comments I hear time and time again when talking about Elixir is that despite the plethora of great material out there teaching the language, more could be done diving into how one can use it to develop real-world software.
I'm personally a proponent of learning by doing, and love following books such as Crafting Interpreters to learn new concepts/ideas by building software. Having said that, writing a book is both daunting and admittedly perhaps out of my reach currently, but this idea is still very practical and useful on a small scale!
This post, as the first in a series of posts I hope to write, is meant to be the same kind of thing: implementing a really small, rough version of commonly used libraries/frameworks/projects in Erlang/Elixir which you can follow through, commit-by-commit, concept-by-concept, hopefully learning a bit about how these languages are used in the real world and picking up a few things here and there.
Without much ado, we'll aim to build a small implementation of the :telemetry library in Elixir!
For those more interested in viewing said artefact instead of following along, you can also view this repository, which contains annotated source code for what this blog post is attempting to build. It may also be a useful aid in following the post itself! Ideally each commit builds a new, complete unit of work we can show off and talk about, so make use of the commit history!
What is :telemetry?
In short, :telemetry is a small Erlang library which allows you to essentially broadcast events in your application's business logic, and implement handlers responsible for listening to these events and doing work in response to them.
Some example usage might be as follows:
defmodule MyApp.User do
def login(username, password) do
if password_correct?(username, password) do
:telemetry.execute([:user, :login], %{current_time: DateTime.utc_now()}, %{success: true, username: username})
:ok
else
:telemetry.execute([:user, :login], %{current_time: DateTime.utc_now()}, %{success: false, username: username})
:ok
end
end
defp password_correct?(_username, _password), do: false
end
When this :telemetry.execute/3 function is called, any handlers which listen for the event [:user, :login] are run. These handlers can be implemented within the scope of your own project (e.g. Phoenix LiveView's LiveDashboard feature listens to metrics raised elsewhere in your application), but also across applications (e.g. if you have a dependency which handles the database, like Ecto, you can write your own custom handlers for its telemetry events).
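For completeness, here is roughly what attaching a handler to the real library looks like (the handler id and the handler body are made up for illustration). Note that real :telemetry handler functions take four arguments: the event name, the measurements, the metadata, and the config term passed at attach time — our clone will simplify this to three.

```elixir
# Only needed when running outside a Mix project that already
# starts the :telemetry application for us.
{:ok, _} = Application.ensure_all_started(:telemetry)

# Hypothetical handler for the [:user, :login] event raised above.
# The handler id ("log-user-logins") just needs to be unique.
:ok =
  :telemetry.attach(
    "log-user-logins",
    [:user, :login],
    fn _event, measurements, metadata, _config ->
      IO.puts("Login for #{metadata.username} at #{measurements.current_time}")
    end,
    nil
  )
```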
:telemetry also provides some other functions, such as span/3, which is meant to measure the time a given function takes to execute, plus a bunch of other goodies you can get from libraries built on top of :telemetry. But these are all just abstractions utilising the base execute/3 function we outlined above, and as such the focus of this project will be building a minimal implementation of that.
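As a taste of that higher-level API, here is roughly how the real library's span/3 is used (the event name and metadata are made up for illustration):

```elixir
# Only needed when running outside a Mix project.
{:ok, _} = Application.ensure_all_started(:telemetry)

# span/3 wraps a zero-arity function, emitting [:db, :query, :start]
# before it runs and [:db, :query, :stop] (with a :duration measurement)
# after it returns, or [:db, :query, :exception] if it raises.
# The wrapped function must return a {result, stop_metadata} tuple.
result =
  :telemetry.span([:db, :query], %{source: "users"}, fn ->
    rows = [] # stand-in for the real work being measured
    {rows, %{source: "users", row_count: length(rows)}}
  end)
```

span/3 then returns the wrapped function's result, so callers can use it transparently.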
It is important to note that :telemetry
is an Erlang library, but we will focus on building our clone of it in Elixir. This is because Erlang code is notoriously hard to read for someone without any experience reading/writing Erlang, though an Erlang version of this post can be written if people think it would be helpful. Writing it in Elixir will be a helpful deep dive into how to bootstrap an Elixir project, and will also hopefully explain how :telemetry
works, serving as a Rosetta Stone for people unfamiliar with but interested in Erlang.
Prerequisites
The only thing you need to start this project is a working Elixir installation. I'm personally writing this post with Elixir 1.10.4, as that is a reasonably up-to-date version of Elixir at the time of writing.
Because I'm using NixOS, I've also included files to bootstrap the developer environment with lorri in the example project. If you're using this too, you should be able to simply copy the shell.nix
and env.sh
files into your local working directory.
For asdf users, follow its installation procedure for this project, or get a version of Elixir from literally anywhere else.
Starting the project
The first thing we need to do to start writing code is to bootstrap a new Elixir project. Thankfully, Elixir comes with a build tool called mix
which can make this first step more or less trivial!
To create a new project with mix
, you can simply run mix new $MY_PROJECT_NAME
and it'll generate the minimal file structure you need to start hacking on it.
By default, mix will generate an empty project with no supervision tree, which for our use case would mean some unnecessary work. Thankfully, mix supports generating different templates (even custom ones), such as umbrella applications or libraries with supervision trees. You can run mix help new
to see more detailed information.
For our use case, we'll actually want to create a project with a supervision tree and application callback. This means the project is startable by any project which includes it as a dependency, and it will have its own supervision tree and processes which can automatically be spawned. This is perhaps getting ahead of ourselves, but we'll need it further along.
Another consideration we'll have to make is the name of our library. Because mix
generates a few files for us describing our project, we want to meaningfully name it. It is a convention in the Elixir ecosystem that for applications named :credo
, :absinthe
, :phoenix
, there exist top level modules Credo
, Absinthe
, and Phoenix
This isn't actually the case for us though: if we want to publish this package and make it widely available, we can't choose a name that already exists on hex.pm, which of course :telemetry
already does.
We can circumvent this by generating a new project via the following snippet which will generate the standard Elixir application boilerplate, set up a basic supervision tree and name your project :build_you_a_telemetry
but name the core module namespace Telemetry
instead:
mix new build_you_a_telemetry --sup --module Telemetry
Once this is done, we can verify that everything is hooked up together by running mix test
which should be all good. This completes the first step of building a :telemetry
clone in Elixir. Your local project should now roughly look like the result of this commit in the example repository.
If you now take a look into the lib/
directory, we can see that mix
has generated a top level telemetry.ex
module. This is where we will be adding all of our top-level functions. It has also generated a lib/telemetry/
directory where we can add any modules tailored to more specific behaviour which we don't necessarily want to expose at the top level of our library; we'll get to this shortly. I bring this up because conventionally the test/
directory should pretty much mimic the file structure of the lib/
directory, so when we do get around to adding more files in lib/telemetry/
we will be doing so for the test/
directory also.
Scaffolding our implementation
I'm very much not a proponent of TDD, in that I don't believe in dogmatically writing tests before writing any code. However, for this post I think it is a good starting point, as we can begin by taking a look at what our spec is.
If we look at the brief usage example we've given above, we know that the core entrypoint of Telemetry
is the execute/3
function, so we can add a test to test/telemetry_test.exs
to assert this contract:
# `describe` blocks let us group related unit tests. It is a fairly common convention to
# have a `describe` block for each main function in your modules, with individual nested
# test cases to describe behaviour based on different permutations of arguments/state.
describe "execute/3" do
test "returns :ok" do
assert :ok = Telemetry.execute([:example, :event], %{latency: 100}, %{status_code: "200"})
end
end
Of course, if we run mix test
, this will fail with the following output:
warning: Telemetry.execute/3 is undefined or private
test/telemetry_test.exs:7: TelemetryTest."test execute/3 returns :ok"/1
1) test execute/3 returns :ok (TelemetryTest)
test/telemetry_test.exs:6
** (UndefinedFunctionError) function Telemetry.execute/3 is undefined or private
code: assert :ok = Telemetry.execute([:example, :event], %{latency: 100}, %{status_code: "200"})
stacktrace:
(build_you_a_telemetry 0.1.0) Telemetry.execute([:example, :event], %{latency: 100}, %{status_code: "200"})
test/telemetry_test.exs:7: (test)
We can see that, of course, Telemetry.execute/3
hasn't been defined yet so the test is failing. Let's quickly add the minimal implementation that passes this test case into lib/telemetry.ex
:
def execute(_event, _measurements, _metadata) do
:ok
end
And running mix test
again reveals that this passed as expected:
Finished in 0.03 seconds
1 test, 0 failures
For those following this on the example project, this change implements what we just added.
Storing state
What we expect to happen now is that any attached event handlers listening for this event being raised are to be executed. Let's think about what we need to do to get this to work:
1. Someone calls Telemetry.attach/4 to attach an event handler to Telemetry.
2. Someone calls Telemetry.execute/3 like in our test.
3. The invocation of Telemetry.execute/3 causes the handler which was attached in step 1 to be executed.
This implies that there is some hidden, shared state that lives somewhere, which allows us to remember what handlers were registered for a given event, enabling us to execute them when said events are raised. There are a few approaches for doing this in the Elixir world, but in short they all boil down to either using a separate process (such as an Agent
or GenServer
), or by writing to an :ets
table.
Usually, using a GenServer
(or Agent
, though GenServer
s are more flexible) is the more idiomatic approach for simply storing some state, as what we need is actually quite simple. However, :telemetry
actually uses an :ets
table for this as it gives us a bit more functionality than simply storing state. We'll do the same to try and match the functionality of :telemetry
, but it's also beneficial for learning as we'll need to bootstrap some processes to handle this anyway.
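To make the trade-off concrete, here is a minimal sketch of what the process-based alternative might look like (the shape of the state is made up for illustration); the :ets approach we take instead gives us cheap concurrent reads and pattern-matched lookups for free:

```elixir
# A minimal Agent mapping an event name to a list of handler functions.
{:ok, agent} = Agent.start_link(fn -> %{} end)

# "Attaching" a handler prepends it to the list stored under the event key.
handler = fn _event, _measurements, _metadata -> :ok end

Agent.update(agent, fn state ->
  Map.update(state, [:user, :login], [handler], &[handler | &1])
end)

# "Listing" handlers is just a map lookup.
handlers = Agent.get(agent, &Map.get(&1, [:user, :login], []))
```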
What is an :ets table anyway?
The linked documentation for :ets
can be quite daunting, but a high level overview is that :ets
provides a way for us to create in-memory KV databases.
The cool thing about :ets
is that it actually gives us constant-time data access, and it's super quick and easy to create/teardown :ets
tables whenever we need to. The important thing is that :ets
tables have to be explicitly owned by a process and if that owner process dies for whatever reason, the :ets
table by default will be terminated.
We can also create :ets
tables in a few different configurations, to enable/disable read/write concurrency and even change how we store entries (1 value per key, order by key vs insert time, many unique values per key, many potentially duplicate values per key). Tables can also be configured to be globally readable/writable, writable by owner but globally readable, or only readable/writable to owner.
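As a quick illustration of the configuration we'll use below (a protected :duplicate_bag with read concurrency), note how a single key can hold several entries:

```elixir
# Create a table allowing many (possibly duplicate) entries per key,
# readable by any process but writable only by the owning process.
table = :ets.new(:demo_handlers, [:protected, :duplicate_bag, read_concurrency: true])

true = :ets.insert(table, {[:user, :login], {"handler-1", :some_fun}})
true = :ets.insert(table, {[:user, :login], {"handler-2", :other_fun}})

# Looking up a key returns every entry stored under it.
entries = :ets.lookup(table, [:user, :login])
```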
It's an extremely useful piece of machinery with a lot of configuration options, so it's definitely worth reading into a bit despite the dense documentation. There are even extensions such as :dets
and :mnesia
which build atop :ets
to provide disk-based persistence and distribution support respectively.
Implementing handler attachment
Because :ets
tables need to be owned by processes, we need to write a minimal process which will own the table and provide utility functions for reading from and writing to it.
Thankfully, doing this is pretty painless because of the supervision tree boilerplate we already generated with our project. We just need to create a new module that has the use GenServer
macro as follows:
defmodule Telemetry.HandlerTable do
use GenServer
def start_link(_args) do
GenServer.start_link(__MODULE__, nil, name: __MODULE__)
end
@impl GenServer
def init(_args) do
# We need to create a table with the format `duplicate_bag` as it needs to be
# able to handle many different entries for the same key.
table = :ets.new(__MODULE__, [:protected, :duplicate_bag, read_concurrency: true])
{:ok, %{table: table}}
end
end
This creates a minimal GenServer
named Telemetry.HandlerTable
that simply creates a new :ets
table as part of its initialization and stores the reference to that table as part of the GenServer
's state. As an aside, a convention I like to follow is to define a module such as Telemetry.HandlerTable.State
that defines a typed struct to represent the GenServer
state instead of passing maps around ad-hoc. This can be done in this case via:
defmodule Telemetry.HandlerTable.State do
defstruct table: nil
end
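If you want the struct to actually be typed (which will pay off when we set up :dialyzer later), a slightly fuller sketch of the same module might be:

```elixir
defmodule Telemetry.HandlerTable.State do
  @moduledoc "State for the handler-table GenServer."

  # A typespec lets dialyzer check how the state is constructed and used;
  # :ets.tid/0 is the opaque table-identifier type from the :ets module.
  @type t :: %__MODULE__{table: :ets.tid() | nil}
  defstruct table: nil
end
```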
Because GenServer
s are asynchronous processes that can encapsulate business logic and state, in order to get them to do anything, we need to send them a message, and have them handle said message. The GenServer
abstraction provides a bunch of callbacks we can use, but because we want to synchronously do work when someone calls Telemetry.attach/4
and return a response to them, we need to implement the handle_call
callback as follows:
@impl GenServer
def handle_call({:attach, {id, event, function, options}}, _from, %State{} = state) do
true = :ets.insert(state.table, {event, {id, function, options}})
{:reply, :ok, state}
end
One can call this by running GenServer.call(Telemetry.HandlerTable, {:attach, {id, event, function, options}})
which we can encapsulate in a function to make the API slightly nicer:
def attach(id, event, function, options) do
GenServer.call(__MODULE__, {:attach, {id, event, function, options}})
end
This function can be added either to the current Telemetry.HandlerTable
module or to the top level Telemetry
module. I personally prefer that the API and callbacks of a GenServer
are in one file, so I'll add it to Telemetry.HandlerTable
. Elixir also provides us a nice macro called defdelegate
which allows us to expose functions from one module in another, which we can add to the top level Telemetry
module to do the actual implementation of Telemetry.attach/4
as follows:
alias Telemetry.HandlerTable
defdelegate attach(handler_id, event, function, opts), to: HandlerTable
Once this is done, we simply start the GenServer
in our lib/telemetry/application.ex
and our :ets
table and server will automatically be started alongside our library:
defmodule Telemetry.Application do
@moduledoc false
use Application
def start(_type, _args) do
children = [
{Telemetry.HandlerTable, nil}
]
opts = [strategy: :one_for_one, name: Telemetry.Supervisor]
Supervisor.start_link(children, opts)
end
end
And for completeness we can add some unit tests for all of the functions we've implemented, which for brevity you can see as part of this commit. All that is left now is to wire up the Telemetry.execute/3
function to call the functions we've saved in :ets
.
Handling events on Telemetry.execute/3
So now that we have a list of functions attached for a given event :: list(atom)
, we just need to list all of those functions and execute them as part of Telemetry.execute/3
. To do this we need to add a new handle_call/3
to our Telemetry.HandlerTable
GenServer
as follows:
def list_handlers(event) do
GenServer.call(__MODULE__, {:list, event})
end
@impl GenServer
def handle_call({:list, event}, _from, %State{} = state) do
handler_functions =
Enum.map(:ets.lookup(state.table, event), fn {^event, {_id, function, _opts}} ->
function
end)
{:reply, handler_functions, state}
end
Whilst it's not necessary to expose this function in our top-level API module, :telemetry
does it (possibly for allowing other libraries to hook into it) so we can expose our variant of this as well via:
defdelegate list_handlers(event), to: HandlerTable
This should be a pretty small change, which we can quickly unit test, as per this commit.
This function returns a list of the attached functions for a given event, so we can update our implementation of Telemetry.execute/3
to simply iterate over all of these functions and execute them:
def execute(event, measurements, metadata) do
for handler_function <- list_handlers(event) do
handler_function.(event, measurements, metadata)
end
:ok
end
We can now augment our existing execute/3
tests by testing that handlers are actually executed!
describe "execute/3" do
test "returns :ok" do
assert :ok = Telemetry.execute([:example, :event], %{latency: 100}, %{status_code: "200"})
end
test "returns :ok, any attached handlers are executed" do
test_process = self()
assert :ok =
Telemetry.attach(
"test-handler-id-1",
[:example, :event],
fn _, _, _ ->
send(test_process, :first_handler_executed)
end,
nil
)
assert :ok =
Telemetry.attach(
"test-handler-id-2",
[:example, :event],
fn _, _, _ ->
send(test_process, :second_handler_executed)
end,
nil
)
assert :ok = Telemetry.execute([:example, :event], %{latency: 100}, %{status_code: "200"})
assert_received(:first_handler_executed)
assert_received(:second_handler_executed)
end
end
You can see now that with this commit, the basic core functionality of the :telemetry
library is complete.
Tooling and correctness
Before continuing to polish the core business logic of our library (there are a few small things left to do), this is a good point to introduce some common tooling I like to bring into my projects to make them more correct. I'd like to introduce two main tools:
- Credo: from what I understand this is basically Elixir's version of Rubocop or ESLint. It's nice for improving and enforcing code readability and consistency rules, among a few other things!
- :dialyzer and its Elixir wrapper Dialyxir: a static analyzer which can point out definite typing errors, among other things!
Credo
First off, we'll add Credo
to our list of dependencies in our mix.exs
file:
defp deps do
[
{:credo, "~> 1.5", only: [:dev, :test], runtime: false}
]
end
After this is done, all we need to do is run mix deps.get
to download packages from hex.pm
and we're ready to continue. This dependency is set up to not actually execute at runtime. It basically just provides a new mix
target called mix credo
which will execute and report any problems it finds.
If we take a look at the example repository at the most recent commit until now, running mix credo
actually results in the following errors:
==> credo
Compiling 213 files (.ex)
Generated credo app
Checking 6 source files ...
Code Readability
┃
┃ [R] → Modules should have a @moduledoc tag.
┃ lib/telemetry/handler_table.ex:1:11 #(Telemetry.HandlerTable)
┃ [R] → Modules should have a @moduledoc tag.
┃ lib/telemetry/handler_table.ex:4:13 #(Telemetry.HandlerTable.State)
Please report incorrect results: https://github.com/rrrene/credo/issues
Analysis took 0.02 seconds (0.01s to load, 0.01s running 45 checks on 6 files)
14 mods/funs, found 2 code readability issues.
It's saying that we haven't properly documented the modules we added, which is very true. If we add some @moduledoc
tags and re-run mix credo
we see that no further issues are reported! (This is surprising given the fact I've just been writing code in parallel to writing this post but hey ho!)
You can see the overall changes made to the example repository here.
Dialyzer and Dialyxir
Similarly to how we added Credo
as a dependency of our application, we want to do the same with Dialyxir
. Actually, in the BEAM world, :dialyzer
comes as part of Erlang's standard library but using it raw from Elixir is a little painful. Dialyxir
is a simple wrapper for :dialyzer
which adds to mix
a mix dialyzer
target for you to run.
One should be warned that :dialyzer caches data after running in what is called a PLT (a persistent lookup table), but of course, this does not exist for the first run. This means :dialyzer can take a really long time to run initially, so don't be alarmed if this takes a while: :dialyzer is statically analyzing your entire library, and the entire standard library! One should also set up some options to tell Dialyzer
where to put the PLT
, a full mix.exs
configuration can be seen as follows:
diff --git a/mix.exs b/mix.exs
index 3db7007..7026ed5 100644
--- a/mix.exs
+++ b/mix.exs
@@ -7,7 +7,10 @@ defmodule Telemetry.MixProject do
version: "0.1.0",
elixir: "~> 1.10",
start_permanent: Mix.env() == :prod,
- deps: deps()
+ deps: deps(),
+ dialyzer: [
+ plt_file: {:no_warn, "priv/plts/dialyzer.plt"}
+ ]
]
end
@@ -22,7 +25,8 @@ defmodule Telemetry.MixProject do
# Run "mix help deps" to learn about dependencies.
defp deps do
[
- {:credo, "~> 1.5", only: [:dev, :test], runtime: false}
+ {:credo, "~> 1.5", only: [:dev, :test], runtime: false},
+ {:dialyxir, "~> 1.0", only: [:dev], runtime: false}
]
end
end
To execute :dialyzer
, simply run mix dialyzer
and wait for it to finish running:
Starting Dialyzer
[
check_plt: false,
init_plt: '/home/chris/git/vereis/build_you_a_telemetry/priv/plts/dialyzer.plt',
files: ['/home/chris/git/vereis/build_you_a_telemetry/_build/dev/lib/build_you_a_telemetry/ebin/Elixir.Telemetry.Application.beam',
'/home/chris/git/vereis/build_you_a_telemetry/_build/dev/lib/build_you_a_telemetry/ebin/Elixir.Telemetry.HandlerTable.State.beam',
'/home/chris/git/vereis/build_you_a_telemetry/_build/dev/lib/build_you_a_telemetry/ebin/Elixir.Telemetry.HandlerTable.beam',
'/home/chris/git/vereis/build_you_a_telemetry/_build/dev/lib/build_you_a_telemetry/ebin/Elixir.Telemetry.beam'],
warnings: [:unknown]
]
Total errors: 0, Skipped: 0, Unnecessary Skips: 0
done in 0m0.92s
done (passed successfully)
In my case, between the previous commit and this commit, everything worked out again (surprisingly) but having :dialyzer
set up is indispensable for helping to catch bugs before they happen.
It's also possible to add custom type annotations to our code to help :dialyzer
out when it's looking for type errors. Type annotations also serve as good documentation so in my opinion it's always good practice to add typespecs at least to the main functions for any public modules. You can see this commit for an example of how to go about doing this.
Mix aliases
Now that we have Credo
and Dialyxir
set up and running, we can ponder about maybe setting up a CI pipeline that runs our tests, Credo
, and Dialyzer
in turn. Elixir provides a way of doing this with mix aliases
which are just functions you can define in your mix.exs
file:
defp aliases do
[lint: ["format --check-formatted --dry-run", "credo --strict", "dialyzer"]]
end
def project do
[
aliases: aliases(),
...
]
end
...
What this does is run mix format --check-formatted --dry-run
to ensure that all files are formatted as per your .formatter.exs
file (add one if you didn't get one generated with your project), credo
and dialyzer
in one fell swoop. Ideally in your CI pipeline from here, you can simply have three different steps:
- Run
mix compile
to make sure everything actually builds correctly. For good measure I setwarnings_as_errors: true
so warnings don't leak into my code. - Run
mix lint
to ensure code quality, conventions etc are as we expect. - Run
mix test --cover
to ensure all tests pass and to ensure that we have an adequate test coverage.
I'll leave implementing CI to the reader though, as that varies a lot depending on your preferences and repository hosting solution, but you can see our latest bit of progress on the example repository.
Final changes
Now that we've gotten out of the rabbit-hole which is setting up some useful tooling for our continued development, we need to add one last main feature: exception handling.
If we look at the implementation of :telemetry
's execute/3
function equivalent, we can see that it's actually not just executing the handler functions. While iterating over the list of attached handler functions, it makes sure to wrap the handler execution in a try/rescue/catch
to prevent the caller from exploding if something goes wrong!
This way, the caller never has to worry about :telemetry
calls blowing up the calling context; instead, we react to these exceptions and silence them, removing any failing handlers from the handler table.
We can pretty much copy the approach taken there. We need to update our Telemetry.execute/3
function as we do below, but we also need to be able to selectively detach a given handler. This is why we always pass in a handler_id
when attaching handlers. These are meant to be unique IDs for us to identify individual handlers with. We will need to update our Telemetry.HandlerTable
to treat these keys as unique, as well as implement a way of deleting handlers:
# lib/telemetry.ex
@spec detach_handler(handler_id(), event()) :: :ok | {:error, :not_found}
defdelegate detach_handler(handler_id, event), to: HandlerTable
@spec execute(event(), measurements(), metadata()) :: :ok
def execute(event, measurements, metadata) do
for {handler_id, handler_function} <- list_handlers(event) do
try do
handler_function.(event, measurements, metadata)
rescue
error ->
log_error(event, handler_id, error, __STACKTRACE__)
detach_handler(handler_id, event)
catch
error ->
log_error(event, handler_id, error, __STACKTRACE__)
detach_handler(handler_id, event)
end
end
:ok
end
# Note: Logger.error is a macro, so remember to add `require Logger` at the top of the module.
defp log_error(event, handler, error, stacktrace) do
Logger.error("""
Handler #{inspect(handler)} for event #{inspect(event)} has failed and has been detached.
Error: #{inspect(error)}
Stacktrace: #{inspect(stacktrace)}
""")
end
# lib/telemetry/handler_table.ex
@impl GenServer
def handle_call({:attach, {handler_id, event, function, options}}, _from, %State{} = state) do
# Pattern match the existing table to make sure that no existing handlers exist for the given
# event and handler id.
#
# If nothing is found, we're ok to insert this handler. Otherwise fail
case :ets.match(state.table, {event, {handler_id, :_, :_}}) do
[] ->
true = :ets.insert(state.table, {event, {handler_id, function, options}})
{:reply, :ok, state}
_duplicate_id ->
{:reply, {:error, :already_exists}, state}
end
end
@impl GenServer
def handle_call({:list, event}, _from, %State{} = state) do
response =
Enum.map(:ets.lookup(state.table, event), fn {^event, {handler_id, function, _opts}} ->
{handler_id, function}
end)
{:reply, response, state}
end
@impl GenServer
def handle_call({:detach, {handler_id, event}}, _from, %State{} = state) do
# Try deleting any entry in the ETS table that matches the pattern below, caring only
# about the given event and handler_id.
#
# If nothing is deleted, return an error, otherwise :ok
case :ets.select_delete(state.table, [{{event, {handler_id, :_, :_}}, [], [true]}]) do
0 ->
{:reply, {:error, :not_found}, state}
_deleted_count ->
{:reply, :ok, state}
end
end
@spec detach_handler(handler_id :: String.t(), Telemetry.event()) :: :ok | {:error, :not_found}
def detach_handler(handler_id, event) do
GenServer.call(__MODULE__, {:detach, {handler_id, event}})
end
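The :ets.select_delete/2 call above takes a "match specification": a list of {pattern, guards, body} triples, where a body of [true] means "delete the matched object". A tiny standalone example (the table and entries here are made up):

```elixir
table = :ets.new(:demo, [:duplicate_bag])
true = :ets.insert(table, {[:event], {"id-1", :fun, nil}})
true = :ets.insert(table, {[:event], {"id-2", :fun, nil}})

# Delete only the entry whose handler id is "id-1", ignoring the function
# and options fields; the return value is the number of objects deleted.
deleted = :ets.select_delete(table, [{{[:event], {"id-1", :_, :_}}, [], [true]}])
```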
We can add unit tests for the individual handle_call
clause we added:
test "{:detach, {handler_id, event}} returns :ok when deleting an attached handler", %{
state: state
} do
assert {:reply, :ok, _new_state} =
Telemetry.HandlerTable.handle_call(
{:attach, {"my-event", [:event], fn -> :ok end, nil}},
nil,
state
)
assert {:reply, :ok, _new_state} =
Telemetry.HandlerTable.handle_call({:detach, {"my-event", [:event]}}, nil, state)
assert length(:ets.tab2list(state.table)) == 0
end
test "{:detach, {handler_id, event}} returns {:error, :not_found} when deleting an unattached handler",
%{
state: state
} do
assert {:reply, {:error, :not_found}, _new_state} =
Telemetry.HandlerTable.handle_call({:detach, {"my-event", [:event]}}, nil, state)
assert length(:ets.tab2list(state.table)) == 0
end
And lastly, we can unit test the entire flow by testing the top level lib/telemetry.ex
like so:
test "returns :ok, any attached handlers that raise exceptions are detached" do
test_process = self()
assert :ok =
Telemetry.attach(
"detach-handler-test-id-1",
[:detach, :event],
fn _, _, _ ->
send(test_process, :first_handler_executed)
end,
nil
)
assert :ok =
Telemetry.attach(
"detach-handler-test-id-2",
[:detach, :event],
fn _, _, _ ->
raise ArgumentError, message: "invalid argument foo"
end,
nil
)
assert length(Telemetry.list_handlers([:detach, :event])) == 2
assert :ok = Telemetry.execute([:detach, :event], %{latency: 100}, %{status_code: "200"})
assert_received(:first_handler_executed)
assert length(Telemetry.list_handlers([:detach, :event])) == 1
assert {:error, :not_found} =
Telemetry.detach_handler("detach-handler-test-id-2", [:detach, :event])
end
That leaves us with a more or less functional version of the :telemetry
library written in Elixir! More can definitely be built on top of this, but as a basic starting point I think it serves as a good example of how one can build an application/library end-to-end using Elixir, one that provides some real, potentially ecosystem-improving benefit.
You can follow the example repository up to this commit which shows the library as what I'd call feature-complete.
Publishing the package
The last thing you need to do is to publish the package onto hex.pm. You can do this quite easily with the following steps:
- You need to register with hex.pm. You can do this by running
mix hex.user register
- You need to then add a description and package definition to your
mix.exs
file as per this commit - You need to run
mix hex.publish
- Check out your newly minted project on hex.pm! The example repository of this project is uploaded here for instance.
Conclusion
Hopefully this blog post was, and will continue to serve as, a helpful reference for how to take an Elixir project from beginning to end, ultimately publishing it to hex.pm.
I intend on writing more posts of this type because it's both genuinely helpful, in my opinion, and helps me consolidate my own knowledge. Thanks for taking the time to read this and stay tuned :-)