Since all changes are already written into history, why not also enforce
the proper updating of the modified_at and modified_by columns?
This way we can be sure these are always set correctly and no longer have
to take care of that in every SQL statement.
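A minimal sketch of one way to enforce this, assuming a plpgsql trigger and
a transaction-local setting named dim.username (the setting, function and
table names are illustrative, not the actual schema):

    // Hypothetical migration snippet: the trigger always overwrites
    // modified_at/modified_by, so handlers can neither forget nor fake them.
    const enforceModified = `
    CREATE OR REPLACE FUNCTION enforce_modified() RETURNS trigger AS $$
    BEGIN
        NEW.modified_at := now();
        NEW.modified_by := current_setting('dim.username', true);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER set_modified
        BEFORE INSERT OR UPDATE ON layer3domain
        FOR EACH ROW EXECUTE FUNCTION enforce_modified();
    `
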
This is the first major container draft and implements most of the
functions to create containers, remove them, and show and set attributes
on them.
It also implements the special case from dim where a list of all
containers and their free space is returned.
It currently does not implement partial output when a subnet or
layer3domain is returned.
First I want to know how well it actually scales and performs, as this is
a major pain point in the Python implementation.
At the moment the output also has the problem that it can grow quite
large in memory, as the tree is built in the middleware. A better way
would be to build the json directly in the database so it can be passed
through unchanged. We will have to see when this becomes a major issue.
When a name is not mapped, the field is not updated. This is a bit
weird and should maybe raise an error instead, but at least we avoid
changing columns that are not meant to be changed.
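A rough sketch of that behaviour, with a plain map standing in for whatever
mapping type the code actually uses (names here are illustrative):

    // mapFields translates external names to column names; anything that is
    // not in the mapping is dropped instead of updating an unintended column.
    func mapFields(mapping map[string]string, input map[string]any) map[string]any {
        out := make(map[string]any, len(input))
        for name, value := range input {
            column, ok := mapping[name]
            if !ok {
                continue // unmapped name: silently skipped, no update
            }
            out[column] = value
        }
        return out
    }
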
The default not null constraint only checks for the SQL NULL, not a json
null.
Therefore add an extended not-null check that covers both possible null
values.
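A minimal sketch of such a check, assuming a jsonb column named attributes
on an ippool table (names are illustrative); both the SQL NULL and the json
null literal are rejected:

    const attributesNotNull = `
    ALTER TABLE ippool
        ADD CONSTRAINT attributes_not_null
        CHECK (attributes IS NOT NULL AND attributes <> 'null'::jsonb);
    `
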
This also adds a view to get a list of all containers and their free
space in between.
This is needed for ippool_list to get a nice overview of everything.
The code of the function is based on
https://pkg.go.dev/inet.af/netaddr#IPRange.Prefixes
Many thanks to Brad Fitzpatrick for the awesome code and changelog that
make it possible to understand what is going on.
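For reference, a simplified IPv4-only sketch of that prefix-splitting idea
(the project's own function may look different):

    package main

    import (
        "fmt"
        "math/bits"
        "net/netip"
    )

    // prefixes returns the minimal list of CIDR prefixes that exactly cover
    // the inclusive IPv4 range [start, end], similar in spirit to netaddr's
    // IPRange.Prefixes but restricted to IPv4 and uint32 math.
    func prefixes(start, end uint32) []netip.Prefix {
        var out []netip.Prefix
        s, e := uint64(start), uint64(end)
        for s <= e {
            // Largest block that is aligned at s covers 2^align addresses.
            align := 32
            if s != 0 && bits.TrailingZeros64(s) < 32 {
                align = bits.TrailingZeros64(s)
            }
            // Shrink the block until it no longer overshoots the range end.
            for align > 0 && s+(1<<uint(align))-1 > e {
                align--
            }
            addr := netip.AddrFrom4([4]byte{byte(s >> 24), byte(s >> 16), byte(s >> 8), byte(s)})
            out = append(out, netip.PrefixFrom(addr, 32-align))
            s += 1 << uint(align)
        }
        return out
    }

    func main() {
        // Free space between two allocations: 10.0.0.16 up to 10.0.0.47.
        for _, p := range prefixes(0x0A000010, 0x0A00002F) {
            fmt.Println(p) // 10.0.0.16/28, then 10.0.0.32/28
        }
    }
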
Add support for showing all pool attributes and setting them.
Also add some helper functions to FieldMap to change and check the
requested attributes.
This was needed because the layer3domain needs to be set through
attributes instead of a link function (this should be changed, but for
now we stay compatible with ndcli).
So we filter out the layer3domain name and replace it with the ID, so
that the update can do both at the same time. Maybe there is a better
way to handle this in the future.
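A hypothetical sketch of that rewrite, assuming pgx and illustrative
table/column names (a layer3domain_id column referencing layer3domain.id);
this is not the actual code:

    import (
        "context"
        "fmt"

        "github.com/jackc/pgx/v5"
    )

    // rewriteLayer3domain replaces a "layer3domain" attribute given by name
    // with the corresponding ID, so one update can set both the link and the
    // remaining attributes.
    func rewriteLayer3domain(ctx context.Context, tx pgx.Tx, attrs map[string]any) error {
        name, ok := attrs["layer3domain"].(string)
        if !ok {
            return nil // nothing to rewrite
        }
        var id int64
        err := tx.QueryRow(ctx, `SELECT id FROM layer3domain WHERE name = $1`, name).Scan(&id)
        if err != nil {
            return fmt.Errorf("unknown layer3domain %q: %w", name, err)
        }
        delete(attrs, "layer3domain")
        attrs["layer3domain_id"] = id
        return nil
    }
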
This commit consists of two things.
1. server.go will now set two variables for the current transaction, the
username and request id. These are transaction local and therefore do
not leak into the connection.
2. The initial schema received a history table and a trigger. This
trigger writes changes into the history table. When inserting records,
the function will pull the transaction-local variables and add them
to the record.
The trigger is added to all tables, so that a complete changelog is
created.
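A minimal sketch of both parts, assuming pgx on the Go side and
illustrative setting and table names (dim.username, dim.request_id,
history); this is not the actual implementation:

    import (
        "context"

        "github.com/jackc/pgx/v5"
    )

    // beginAudited opens a transaction and sets the two variables;
    // set_config(..., true) scopes them to the transaction, so nothing
    // leaks back into the pooled connection.
    func beginAudited(ctx context.Context, conn *pgx.Conn, user, reqID string) (pgx.Tx, error) {
        tx, err := conn.Begin(ctx)
        if err != nil {
            return nil, err
        }
        _, err = tx.Exec(ctx,
            `SELECT set_config('dim.username', $1, true),
                    set_config('dim.request_id', $2, true)`, user, reqID)
        if err != nil {
            tx.Rollback(ctx)
            return nil, err
        }
        return tx, nil
    }

    // The trigger side pulls those variables back out when writing history.
    const historyTrigger = `
    CREATE OR REPLACE FUNCTION write_history() RETURNS trigger AS $$
    BEGIN
        INSERT INTO history (table_name, action, record, username, request_id)
        VALUES (TG_TABLE_NAME, TG_OP,
                to_jsonb(CASE WHEN TG_OP = 'DELETE' THEN OLD ELSE NEW END),
                current_setting('dim.username', true),
                current_setting('dim.request_id', true));
        RETURN NULL; -- AFTER trigger, the return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER history_trigger
        AFTER INSERT OR UPDATE OR DELETE ON layer3domain
        FOR EACH ROW EXECUTE FUNCTION write_history();
    `
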
These changes serve as the basis for further features. One is
searching for changes on specific resources (think history rr, history
zone, ...).
The other feature is a way to subscribe to changes in the database based
on filters. This will be the way to implement the output feature of dim.
This adds the API call to return all fields for a single layer3domain.
This should serve as a nice basis for other parts to be implemented.
All fields are put together into a single json document, merged with
the other attributes of the table, and then returned to the requester.
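A rough sketch of how such a merged document can be produced in one query,
assuming a jsonb column named attributes (the real query may differ):

    // Row columns and the attributes blob are merged into a single json
    // document; the attributes key itself is stripped before merging.
    const layer3domainShow = `
    SELECT to_jsonb(l) - 'attributes' || COALESCE(l.attributes, '{}'::jsonb)
    FROM layer3domain l
    WHERE l.name = $1
    `
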
This function makes writing a show view much less of an overhead.
By defining which columns to return and automatically merging the
attributes into the main view, this becomes much easier.
It doesn't support the recursive view yet, so that is something a
client would need to handle, but for now this should be good enough.
This also fixes a small issue in the update clause handler by moving
the index handling into the branch that runs when a column was found.
Without that, the index gets moved to the wrong position and the
resulting query uses the wrong parameter indexes.
When selecting the content of a jsonb field, the type is jsonb by
default.
But we need the proper postgres type, so that the output can be parsed
properly.
Therefore make sure that the field has the proper output operator
attached.
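For illustration, the difference comes down to which operator pulls the
value out of the attributes column (column and key names here are just
examples):

    // '->' keeps the jsonb type, so values stay json-encoded on the wire;
    // '->>' applies the text output, which is what the client can parse.
    const (
        asJsonb = `SELECT attributes->'assignmentsize'  FROM ippool` // jsonb
        asText  = `SELECT attributes->>'assignmentsize' FROM ippool` // text
    )
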
For this to work, I have added a new function that takes a list of
key-looking things and converts them into json.
At the same time, it can also convert json-looking payloads and
prepare them for the database (that last part was not intended, but
works).
With the many columns on which attributes can be set, this
functionality should help quite a bit.
This is the first draft of creating layer3domains and
ipblocks/containers.
This allows some testing with different things, like list building for
complex container output, but also how containers should behave.
Containers are subnets in various ranges. But only when a subnet is
assigned to a pool is it truly a subnet. When it is not assigned to a
pool, it is considered a container.
Containers serve to group ranges or block them off for different
purposes, so we need to keep that behaviour.
When extra fields are fetched from the attributes column, the table
they come from must be specified. If that is not done and another table
in the query also has an attributes column, the query ends in an error.
This modifies the zone list command in such a way that a query result
can be returned directly as the response.
With a bit of work, large query results could be rendered to the output
with a streaming json renderer.
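A rough sketch of what such streaming could look like, assuming pgx rows
and net/http; this is purely illustrative, not what this commit implements:

    import (
        "encoding/json"
        "net/http"

        "github.com/jackc/pgx/v5"
    )

    // writeZones encodes each row as soon as it is read instead of
    // collecting the whole result set in memory first.
    func writeZones(w http.ResponseWriter, rows pgx.Rows) error {
        defer rows.Close()
        enc := json.NewEncoder(w)
        w.Write([]byte("["))
        first := true
        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil {
                return err
            }
            if !first {
                w.Write([]byte(","))
            }
            first = false
            if err := enc.Encode(map[string]string{"name": name}); err != nil {
                return err
            }
        }
        w.Write([]byte("]"))
        return rows.Err()
    }
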
This type represents a list of fields someone might want to have
returned.
This can be used together with the query library to build select
statements that return exactly the data the user needs or wants.
This way we may be able to avoid selecting data the caller does not
need and therefore provide better performance.
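Purely as an illustration of the idea (the real type will differ), such a
field list could render its own select list:

    import "strings"

    // FieldList is the set of fields the caller asked for; it can be turned
    // into the column part of a select statement.
    type FieldList []string

    func (f FieldList) SelectClause() string {
        if len(f) == 0 {
            return "*"
        }
        return strings.Join(f, ", ")
    }

For example, FieldList{"name", "created"}.SelectClause() yields
"name, created".
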
This is a small library to build queries and hand the result back to
the outside world.
Currently it supports building the select clause and converting rows
into a list of maps, so that the result can be returned as a list.
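The rows-to-maps part could, assuming pgx, look roughly like this (a
sketch, not the library's actual code):

    import "github.com/jackc/pgx/v5"

    // rowsToMaps turns a result set into a list of column-name keyed maps
    // so it can be handed straight to the json encoder.
    func rowsToMaps(rows pgx.Rows) ([]map[string]any, error) {
        defer rows.Close()
        fields := rows.FieldDescriptions()
        var out []map[string]any
        for rows.Next() {
            values, err := rows.Values()
            if err != nil {
                return nil, err
            }
            m := make(map[string]any, len(fields))
            for i, fd := range fields {
                m[string(fd.Name)] = values[i]
            }
            out = append(out, m)
        }
        return out, rows.Err()
    }
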
This package will contain all the parameter types that need parsing from
the outside world into internal types.
Each type is required to implement its own UnmarshalJSON. At this point
it should also check whether the incoming data is valid input, but it is
not required to check against the database.
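An illustrative example of such a type (the concrete types and rules in
the package will differ): it validates while decoding, but never touches
the database:

    import (
        "encoding/json"
        "fmt"
    )

    // Layer3domainName is a request parameter that checks its own syntax as
    // it is unmarshalled; existence in the database is checked elsewhere.
    type Layer3domainName string

    func (n *Layer3domainName) UnmarshalJSON(data []byte) error {
        var s string
        if err := json.Unmarshal(data, &s); err != nil {
            return err
        }
        if s == "" || len(s) > 128 {
            return fmt.Errorf("invalid layer3domain name %q", s)
        }
        *n = Layer3domainName(s)
        return nil
    }
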
These helpers enable parsing the parameters into method-specific structs.
As the parameter list is an array, the order of arguments is important.
Sadly, type checks can only be done at runtime, because all parameters
are converted to a list of interface{}. Any mistake therefore only shows
up as an error at runtime, so be careful.
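A rough sketch of the idea (names and details are assumptions): the
positional list is unpacked in order, and any type mismatch only surfaces
here, at runtime:

    import (
        "encoding/json"
        "fmt"
    )

    // unpackParams decodes the positional parameter list into the pointers
    // the handler provides; order matters, and type errors appear at runtime.
    func unpackParams(params []any, dst ...any) error {
        if len(params) != len(dst) {
            return fmt.Errorf("expected %d parameters, got %d", len(dst), len(params))
        }
        for i, p := range params {
            raw, err := json.Marshal(p)
            if err != nil {
                return err
            }
            if err := json.Unmarshal(raw, dst[i]); err != nil {
                return fmt.Errorf("parameter %d: %w", i, err)
            }
        }
        return nil
    }
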
This adds transaction handling to the connection and context handling.
It will raise an error and inform the client if anything goes wrong
with the transaction.
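A minimal sketch of the shape this can take, assuming pgx (illustrative
only): run the handler inside a transaction and surface any failure to the
caller as an error:

    import (
        "context"

        "github.com/jackc/pgx/v5"
    )

    // withTx wraps a request handler in a transaction; on any error the
    // transaction is rolled back and the error is reported to the caller.
    func withTx(ctx context.Context, conn *pgx.Conn, fn func(pgx.Tx) error) error {
        tx, err := conn.Begin(ctx)
        if err != nil {
            return err
        }
        if err := fn(tx); err != nil {
            tx.Rollback(ctx)
            return err
        }
        return tx.Commit(ctx)
    }
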