Version: 8.2411.x.x RR

Shared out-of-context data

In general, the state of authentication processing is limited to a user's session. Any state changes made within the context of a request (or, by extension, within the conversation) that are not stored in the user's session are lost when the operation completes. Moreover, data stored in these contexts is only visible to operations running within the same context, that is, for the same user.

However, in some cases it is required to maintain a global state, which is not scoped to a particular user session and which can have a (much) longer lifetime. For this purpose, nevisAuth uses its shared out-of-context data service (OOCDS).

The out-of-context data service is provided by the OutOfContextDataService interface. It has the following properties:

  • Storing key value pairs (many of the keys used look like file paths for historical reasons).
  • Each entry has an associated expiration date and will not be visible after this date.
  • Reading, writing, removing and querying entries is guaranteed to remain consistent under concurrent access by multiple threads and processes. However, no guarantees are made regarding the performance of these operations.

Use Cases

The OOCDS interface is used in several cases, among others:

  • SAML

    • To verify the uniqueness of received SAML messages.
    • To look up a SAML artifact when an ArtifactResolve message has been received.
  • OAuth / OpenID Connect

    • During the Authorization Code Flow.
  • Custom

    • Custom Java based auth states may access OOCD.
    • The ScriptState exposes this service via the scope OOCD.
    • nevisAuth expressions may access OOCD.
info

It can generally be assumed that federation protocols use the OOCDS.

Configuration

There are 3 available options:

  • No OOCD is configured (default).
  • In memory OOCD configured by LocalOutOfContextDataStore.
  • SQL OOCD configured by RemoteOutOfContextDataStore.

If no OOCD is configured and its usage is attempted, an error is thrown.

Interface

The OutOfContextDataService offers the following methods:

Set key-value pair(s):

  • void set(String key, String value, Instant notOnOrAfter)
  • void set(Map<String, String> keyValuePairs, Instant notOnOrAfter)

Get key-value pair(s):

  • String get(String key)
  • Map<String, String> getWithPrefix(String keyPrefix)

Remove key-value pair(s):

  • void remove(String key)
  • void removeWithPrefix(String keyPrefix)
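The semantics above (per-entry expiration, prefix queries) can be illustrated with a minimal in-memory sketch. This is not the nevisAuth implementation; the class name ToyOocdStore is hypothetical, and only the method shapes mirror the interface:

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative toy store mimicking the OutOfContextDataService semantics
// described above. NOT the nevisAuth implementation.
class ToyOocdStore {

    private static final class Entry {
        final String value;
        final Instant notOnOrAfter;
        Entry(String value, Instant notOnOrAfter) {
            this.value = value;
            this.notOnOrAfter = notOnOrAfter;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    public void set(String key, String value, Instant notOnOrAfter) {
        store.put(key, new Entry(value, notOnOrAfter));
    }

    // An entry is no longer visible on or after its expiration instant.
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null || !Instant.now().isBefore(e.notOnOrAfter)) {
            return null;
        }
        return e.value;
    }

    // Keys often look like file paths, so prefix queries act like
    // listing a "directory" of entries.
    public Map<String, String> getWithPrefix(String keyPrefix) {
        Map<String, String> result = new HashMap<>();
        Instant now = Instant.now();
        store.forEach((k, e) -> {
            if (k.startsWith(keyPrefix) && now.isBefore(e.notOnOrAfter)) {
                result.put(k, e.value);
            }
        });
        return result;
    }

    public void remove(String key) {
        store.remove(key);
    }

    public void removeWithPrefix(String keyPrefix) {
        store.keySet().removeIf(k -> k.startsWith(keyPrefix));
    }
}
```

For example, a SAML artifact could be stored under a path-like key with a short lifetime and looked up later by an ArtifactResolve handler; expired entries simply become invisible to readers.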

InMemoryOOCDService

The InMemoryOOCDService is a lightweight implementation that allows developers and integrators to use an OOCD without a database, reducing integration and development time.

caution

The in-memory OOCD must not be used in production.

Configuration options:

  • reaperPeriod (string)

    Default value: 60

    The interval, in seconds, at which expired entries are removed.

The InMemoryOOCDService can be configured in the esauth4.xml by the LocalOutOfContextDataStore element. The element must appear between the SessionCoordinator and AuthEngine elements.

Configuration example:

    <LocalOutOfContextDataStore reaperPeriod="60"/>

SqlOOCDService

The SqlOOCDService uses an SQL database as a backend to store out-of-context data. Currently, MariaDB and PostgreSQL databases are supported.

Database setup

The following steps outline the database setup required for classic deployments. This is not required in case you're using the Kubernetes-based setup.

  1. Create the connectionSchemaUser, who has the rights to create the database table in a newly created database, and create the connectionUser, who can modify the content of this table. Then, either:
    1. create the table by hand with the connectionSchemaUser, or
    2. rely on nevisAuth to use the connectionSchemaUser to create the table.
  2. Alternatively, create the database table and the connectionUser with appropriate rights using an existing administrator user. You can either:
    1. use an existing database, or
    2. create a new database by hand.
  3. Now create the database schema. If you want to store strings containing special characters, your database must use a character set that supports them (e.g. UTF-8).
  4. Now create the users that connect to the database.

Depending on your preferences, you can also reuse the NSS database and the nss_auth user from the Remote Session Store setup (see the chapter Session management). Note that in this case you still have to create the user that will create the table, or create the table manually with an administrator user.

caution

It is not recommended to use the same user for database table creation and for data modification.

By default, the SqlOOCDService automatically creates the required database table in the SQL database (this can be disabled by setting connectionAutomaticDbSchemaSetup to "false"). The code below shows the current default table definition together with the initial setup:

MariaDB

SQL Out of Context Data Service Table
CREATE DATABASE IF NOT EXISTS OOCD CHARACTER SET ='utf8' COLLATE ='utf8_unicode_ci';

CREATE USER IF NOT EXISTS `OOCDschemauser`@`localhost` IDENTIFIED BY 'password';
GRANT CREATE ON OOCD.* TO `OOCDschemauser`@`localhost`;

CREATE USER IF NOT EXISTS `OOCDdatauser`@`localhost` IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE, DELETE ON OOCD.* TO `OOCDdatauser`@`localhost`;

FLUSH PRIVILEGES;

CREATE TABLE IF NOT EXISTS `nevisauth_out_of_context_data_service` (
    `key` VARCHAR(1024) NOT NULL,
    `value` MEDIUMTEXT NOT NULL,
    `reap_timestamp` TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
    PRIMARY KEY (`key`),
    INDEX reap_timestamp_idx (`reap_timestamp`)
);
info

To allow remote connections, replace localhost with % (to allow any host) or with a specific host.

You can adapt the table above according to your needs, for example if your data exceeds the limits defined above (key size of 1024 and storage size of 16 MB with MEDIUMTEXT), or fits into smaller data types like TINYTEXT or TEXT. Note that you must keep the same table and column names.

caution

Note that the key column size is 1024 characters, but multi-byte characters reduce how many characters fit into the index.

MariaDB has a maximum index size of 3072 bytes by default. This limit can vary based on the database page size settings.

In the unlikely special case that you only store 4-byte characters such as emojis, you can only store 768 characters.
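The numbers above follow from simple arithmetic: with a 3072-byte index limit, a charset using at most 3 bytes per character (such as MariaDB's utf8) allows 3072 / 3 = 1024 indexable key characters (matching the VARCHAR(1024) default), while a 4-bytes-per-character charset (such as utf8mb4) allows only 768. A small illustrative calculation (the class name is hypothetical):

```java
// Illustrative arithmetic only: how many key characters fit into a
// MariaDB index entry, given the charset's maximum bytes per character.
class KeyIndexLimit {

    // Default InnoDB index entry limit mentioned above.
    static final int MAX_INDEX_BYTES = 3072;

    static int maxKeyChars(int maxBytesPerChar) {
        return MAX_INDEX_BYTES / maxBytesPerChar;
    }

    public static void main(String[] args) {
        System.out.println("utf8    (3 bytes/char): " + maxKeyChars(3)); // 1024
        System.out.println("utf8mb4 (4 bytes/char): " + maxKeyChars(4)); // 768
    }
}
```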

Timezone setup

The timezone database must be initialized. If SELECT * FROM mysql.time_zone_name; returns no rows, the timezone database has not been initialized yet. To fix this, run mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql.

PostgreSQL

SQL Out of Context Data Service Table
CREATE USER "OOCDschemauser" WITH encrypted password 'password';
CREATE USER "OOCDdatauser" WITH encrypted password 'password';

CREATE DATABASE OOCD
WITH
OWNER = "OOCDschemauser";

ALTER ROLE "OOCDschemauser" IN DATABASE OOCD SET search_path TO "OOCDschemauser";
ALTER ROLE "OOCDdatauser" IN DATABASE OOCD SET search_path TO "OOCDschemauser";

\connect OOCD "OOCDschemauser";
CREATE SCHEMA "OOCDschemauser" AUTHORIZATION "OOCDschemauser";

GRANT USAGE ON SCHEMA "OOCDschemauser" to "OOCDdatauser";
GRANT CONNECT ON DATABASE OOCD TO "OOCDdatauser";

ALTER DEFAULT PRIVILEGES FOR USER "OOCDschemauser" IN SCHEMA "OOCDschemauser" GRANT SELECT, INSERT, UPDATE, DELETE, TRIGGER ON TABLES TO "OOCDdatauser";
ALTER DEFAULT PRIVILEGES FOR USER "OOCDschemauser" IN SCHEMA "OOCDschemauser" GRANT USAGE, SELECT ON SEQUENCES TO "OOCDdatauser";
ALTER DEFAULT PRIVILEGES FOR USER "OOCDschemauser" IN SCHEMA "OOCDschemauser" GRANT EXECUTE ON FUNCTIONS TO "OOCDdatauser";

CREATE TABLE IF NOT EXISTS nevisauth_out_of_context_data_service (
    key VARCHAR(1024) NOT NULL PRIMARY KEY,
    value TEXT NOT NULL,
    reap_timestamp TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE INDEX IF NOT EXISTS OOCD_reap_timestamp_idx ON nevisauth_out_of_context_data_service (reap_timestamp);

Configuring nevisAuth

The SqlOOCDService can be configured in the esauth4.xml by the RemoteOutOfContextDataStore element. The element must appear between the SessionCoordinator and AuthEngine elements.

The following list depicts the available configuration options:

  • connectionUrl (string)

    The JDBC URL to the MariaDB / PostgreSQL database. For more details regarding the syntax, see the MariaDB documentation and the PostgreSQL documentation.

    info

    nevisAuth relies on the autocommit feature of the database.

    • MariaDB requires autocommit to be enabled on the database level, or configured in the JDBC driver URL in this property using the query parameter autocommit=true.
    • PostgreSQL has autocommit enabled by default.
  • connectionUser (string)

    The username required to access the data in the database. This user must have SELECT, INSERT, UPDATE and DELETE access rights to the database. You can use the same format as for passwords. For example, you can use the following syntax to specify the username from an environment variable: pipe://echo $SYNC_USER. For more information regarding the allowed syntax, see Passwords in the configuration.

  • connectionPassword (string)

    The password of the user accessing the data. This property accepts standard encryption/obfuscation syntax. See documentation on how to restrict disclosure of passphrases in Passwords in the configuration.

  • connectionSchemaUser (string)

    The name of the user that creates the schema and tables in the database. This user must have CREATE access rights to the database. If not provided, the system will use the user specified with the attribute connectionUser to create the schema.

    info

    It is recommended that separate users create the schema and access the data. You specify these users in the properties connectionSchemaUser and connectionUser, respectively.

  • connectionSchemaPassword (string)

    The password of the user who creates the schema and the tables in the database. If not provided, the system will use the password specified with the attribute connectionPassword to create the schema.

  • connectionTimeout (integer)

    Default value: 30000

    This property controls the maximum number of milliseconds that nevisAuth will wait for a connection from the pool.

  • connectionMaxLifeTime (integer, optional)

    Default value: 1800000 (30 minutes)

    The maximum time, in milliseconds, that a connection is kept in the connection pool.

  • connectionMinPoolSize (Integer, optional)

    Default value: connectionMaxPoolSize

    Minimum number of connections in the connection pool used to connect to the database. If this value is set lower than connectionMaxPoolSize, connections are created on demand.

  • connectionMaxPoolSize (Integer, optional, 10)

    Default value: 10

    Maximum number of connections in the connection pool used to connect to the database. Changing this value might require changing the maximum number of allowed connections on the database server side.

  • reaperPeriod (string)

    Default value: 60

    The interval, in seconds, at which expired entries are removed.

  • connectionAutomaticDbSchemaSetup (boolean)

    Default value: true

    If set to "true", nevisAuth will automatically try to create the table used to store the data (with the CREATE TABLE IF NOT EXISTS syntax, as shown in the sample code snippet above).

    Set this property to "false", if you want to handle this differently, for example because you have different data sizing requirements. Also set the property to "false", if you did not specify the connectionSchemaUser or if the specified user does not have the required CREATE access rights.

The next code block shows an example configuration to be added in the esauth4.xml.

<RemoteOutOfContextDataStore
connectionUrl="jdbc:mariadb://localhost:3306/OOCD?autocommit=true"
connectionUser="OOCDdatauser"
connectionPassword="password"
connectionSchemaUser="OOCDschemauser"
connectionSchemaPassword="password"
connectionAutomaticDbSchemaSetup="true"/>

Below, find another example, where the database table was created manually and the NSS database and the nss_auth user were reused from the remote session store. Note that the connectionSchemaUser falls back to the connectionUser. So if the connectionUser has no CREATE rights, you have to disable the property connectionAutomaticDbSchemaSetup.

<RemoteOutOfContextDataStore
connectionUrl="jdbc:postgresql://localhost:5432/nss"
connectionUser="nss_auth"
connectionPassword="password"
connectionAutomaticDbSchemaSetup="false"/>

Resilient database setup using MariaDB

This chapter describes how to set up the SqlOOCDService in order for it to be tolerant towards database or network outages between nevisAuth and MariaDB. At least one database node must be available to prevent application failure.

Use cases

Failure

On failure of the primary DB node, connections will move to the secondary DB node when a 30-second timeout expires, or immediately if there is an incoming request to the DB. You may experience increased response times for the duration of the switch.

Recovery

On recovery of the primary DB node, connections will move back to the primary DB node once the connection's maximum lifetime expires, that is, after at most 30 minutes.

Implementation overview

Regular clustering solutions provide all fault-tolerance features themselves, so the connecting application is unaware of the resilient setup. The suggested solution takes a different approach: the connecting application becomes part of the resilient setup in terms of configuration.

Two key features to achieve resilience
  • Connectivity (MariaDB JDBC driver)

    • Configure a JDBC URL, where you define the DB nodes in priority order:

      jdbc:mariadb:sequential://host-db1:3306,host-db2:3306/OOCD
  • Data consistency (MariaDB replication)

    • Configure a master-to-master replication.
info

Replication is done by the database. The application and the JDBC driver are not aware of it at all.
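The sequential failover URL above can also be assembled from a priority-ordered host list; the following sketch is illustrative only (the helper class is hypothetical, not part of nevisAuth):

```java
import java.util.List;

// Illustrative helper: builds a MariaDB "sequential" failover JDBC URL.
// With this HA mode, the driver tries the hosts in the given order and
// fails over to the next host when the current one is unreachable.
class FailoverUrlBuilder {

    static String sequentialUrl(List<String> hostsInPriorityOrder, String database) {
        return "jdbc:mariadb:sequential://"
                + String.join(",", hostsInPriorityOrder)
                + "/" + database;
    }

    public static void main(String[] args) {
        String url = sequentialUrl(List.of("host-db1:3306", "host-db2:3306"), "OOCD");
        System.out.println(url);
        // jdbc:mariadb:sequential://host-db1:3306,host-db2:3306/OOCD
    }
}
```

The resulting string would go into the connectionUrl property of the RemoteOutOfContextDataStore element (with autocommit=true appended as a query parameter, as required for MariaDB).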

Replication

Overview of database users

The replicated OOCD is managed by several database users to separate concerns. The creation of the users is explained below.

  • replication_user

    Required permissions: REPLICATION SLAVE, to be able to replicate data.

    Account used by the slave node to log into the master node.

  • binarylog_user

    Required permissions: SUPER permission, to be able to purge the binary logs.

    Account used for binary log management.

Step-by-step setup of the replicated session store

This chapter assumes that the DB setup is already completed.

  1. Creation of the replication user:

    CREATE USER IF NOT EXISTS replication_user IDENTIFIED BY 'replicationpassword';
    GRANT REPLICATION SLAVE ON *.* TO replication_user;
  2. Creation of the binary logs user:

    CREATE USER IF NOT EXISTS binarylog_user IDENTIFIED BY 'binarylogspassword';
    GRANT SUPER ON *.* TO binarylog_user;
  3. Configuration of the MariaDB service.

    To configure the MariaDB service, add the following entries to the file /etc/my.cnf as super user. The two configuration files (host-db1 and host-db2) differ at some points. The different lines are marked with (*).

  • Configure the MariaDB service on host-db1:

    [mariadb]
    # Enabling binary log
    log-bin
    # The ID of this master (*)
    server_id=1
    # The ID of the replication stream created by this master (*)
    gtid-domain-id=1
    # The basename and format of the binary log
    log-basename=mariadbmaster
    binlog-format=MIXED
    # Setting which tables are replicated
    replicate_wild_do_table="OOCD.nevisauth_out_of_context_data_service"
    # Avoiding collisions of primary IDs for tables where the primary ID is auto-incremented
    # Auto-increment value
    auto_increment_increment=2
    # Auto-increment offset (*)
    auto_increment_offset=1
    # Suppressing duplicated keys errors for multi-master setup
    slave_exec_mode=IDEMPOTENT
    # Ignoring some data definition language errors
    slave-skip-errors=1452, 1062
    # Suppressing binary logs after a delay regardless of the replication status
    expire_logs_days=1
    # Maximum number of connections
    max_connections=1000
    # Size of each of the binary log files (default: 1GB)
    max_binlog_size=500M
    # Enabling writing to the DB in parallel threads for the replication
    slave-parallel-threads=10
    # enabling semi-synchronous replication
    rpl_semi_sync_master_enabled=ON
    rpl_semi_sync_slave_enabled=ON
    # change to READ COMMITTED
    transaction-isolation=READ-COMMITTED
  • Configure the MariaDB service on host-db2:

    [mariadb]
    # Enabling binary log
    log-bin
    # The ID of this master (*)
    server_id=2
    # The ID of the replication stream created by this master (*)
    gtid-domain-id=2
    # The basename and format of the binary log
    log-basename=mariadbmaster
    binlog-format=MIXED
    # Setting which tables are replicated
    replicate_wild_do_table="OOCD.nevisauth_out_of_context_data_service"
    # Avoiding collisions of primary IDs for tables where the primary ID is auto-incremented
    # Auto-increment value
    auto_increment_increment=2
    # Auto-increment offset (*)
    auto_increment_offset=2
    # Suppressing duplicated keys errors for multi-master setup
    slave_exec_mode=IDEMPOTENT
    # Ignoring some data definition language errors
    slave-skip-errors=1452, 1062
    # Suppressing binary logs after a delay regardless of the replication status
    expire_logs_days=1
    # Maximum number of connections
    max_connections=1000
    # Size of each of the binary log files (default: 1GB)
    max_binlog_size=500M
    # Enabling writing to the DB in parallel threads for the replication
    slave-parallel-threads=10
    # enabling semi-synchronous replication
    rpl_semi_sync_master_enabled=ON
    rpl_semi_sync_slave_enabled=ON
    # change to READ COMMITTED
    transaction-isolation=READ-COMMITTED
  • Restart the MariaDB servers on both hosts:

    sudo service mariadb restart
Semi-synchronous replication

By default, MariaDB uses asynchronous replication. For stronger consistency, it is recommended to use semi-synchronous replication. The database configurations shown previously enable semi-synchronous replication with the following lines:

rpl_semi_sync_master_enabled=ON
rpl_semi_sync_slave_enabled=ON
info

MariaDB versions before 10.3.3 require the installation of plug-ins for semi-synchronous replication and are not supported.

Start-up of the replication

To start the replication, log in as root into your MariaDB client and run the following commands:

  • on host-db1 (master is host-db2):

    CHANGE MASTER TO
    MASTER_HOST='host-db2',
    MASTER_USER='replication_user',
    MASTER_PASSWORD='replicationpassword',
    MASTER_PORT=3306,
    MASTER_USE_GTID=current_pos,
    MASTER_CONNECT_RETRY=10;
  • on host-db2 (master is host-db1):

    CHANGE MASTER TO
    MASTER_HOST='host-db1',
    MASTER_USER='replication_user',
    MASTER_PASSWORD='replicationpassword',
    MASTER_PORT=3306,
    MASTER_USE_GTID=current_pos,
    MASTER_CONNECT_RETRY=10;
  • on host-db1:

    START SLAVE;
  • on host-db2:

    START SLAVE;

Additional setup

Purging the binary logs

With the provided configuration (expire_logs_days=1 in the MariaDB settings), the system will automatically remove the binary logs that are older than one day, even if the logs were not copied by the slave. This prevents the disk of the master node from being filled up in case the slave is down for a long time. The automatic binary log removal takes place when

  • the master DB node starts,
  • the logs are flushed (nevisAuth does not use this feature),
  • the binary log rotates, or
  • the binary logs are purged manually (see below).

So binary logs older than one day may exist, if none of the listed actions occurred recently.

Complementary to this expiration feature, MariaDB provides the possibility to manually purge the binary logs. The purge action removes all binary logs that were already copied by the slave. This allows a safe removal of the binary logs on a regular basis. The nevisProxy package is delivered with an adaptable purging script, which is located at: /opt/nevisproxy/sql/mariadb/purgebinarylogs.sh

To use this script,

  • copy the script to a location of your choice, and
  • adapt it to your configuration.

The script takes care of both DB nodes, so that it only needs to be configured once.

info

Note that if different database server nodes are used for nevisProxy and nevisAuth, you have to set them up separately.

You can schedule the script to run for example once per hour, with a cron job:

0 * * * * /var/opt/nevisproxy/instance/conf/purgebinarylogs.sh # Absolute path of your adapted script
Sizing the binary logs

The provided configuration (max_binlog_size=500M in the MariaDB settings) allows you to configure the maximum size of the binary log files before rotating. The smaller the size, the more often rotations will occur, which will slow down replication. The advantage is a more efficient purge process. The bigger the size, the less often rotations will occur, but the disk may be filled with old logs.

In our experience, a size of less than 8K can stop replication completely under heavy load, because the slave keeps rotating the logs.

Troubleshooting

Usually the slave stops the replication if an error occurs. You can check the state of the slave with the following SQL command:

show slave status\G

Note that showing the slave status requires the REPLICATION CLIENT grant.

If the replication has stopped, usually the error that caused it will be displayed. First you should try to fix the error. If this is not possible, you can do a "forced" restart of the slave like this:

  • On the master call (to display the current state of the master):

    MariaDB [replicated_session_store]> show master status\G
    *************************** 1. row ***************************
    File: mariadbmaster-bin.000131
    Position: 194630804
    Binlog_Do_DB:
    Binlog_Ignore_DB:
    1 row in set (0.00 sec)
  • On the slave (using the values returned by the call "show master status\G" on the master):

    STOP SLAVE;
    CHANGE MASTER TO
    MASTER_LOG_FILE='mariadbmaster-bin.000131',
    MASTER_LOG_POS=194630804;
    START SLAVE;

In this way, the system restarts the slave without replicating all the changes that occurred between the moment the replication stopped and now.

Resilient database setup using PostgreSQL

Currently not supported.