Admin: Appendix A - Database preparation

Overview

This section describes the steps to be undertaken to ensure that databases are prepared for use by RPI as a data warehouse or auxiliary database.

Google BigQuery ODBC driver configuration

This section describes how to configure Google BigQuery using the Simba ODBC driver in the Redpoint Interaction Docker-based image.

Please follow the steps below:

  1. In your docker-compose.yml file, ensure that the following entries are defined.

YAML
volumes:
  - ./config/odbc:/app/odbc-config
  - ./config/odbc/google-creds:/app/odbc-google-creds-config
  2. Create a new or edit an existing odbc.ini located in the ./config local directory:

    • Set the BigQuery ODBC name and provide credentials.

    • Set the Driver parameter: Driver=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/libgooglebigqueryodbc_sb64.so.

e.g.:

CODE
[ODBC]
Trace=no

[ODBC Data Sources]
gbq=Simba ODBC Driver for Google BigQuery 64-bit

[gbq]
Description=Simba ODBC Driver for Google BigQuery (64-bit) DSN
Driver=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/libgooglebigqueryodbc_sb64.so
Catalog=[ENVIRONMENTAL SETTING]
DefaultDataset=[ENVIRONMENTAL SETTING]
SQLDialect=1
OAuthMechanism=1
RefreshToken=[ENVIRONMENTAL SETTING]
KeyFilePath=[ENVIRONMENTAL SETTING]
TrustedCerts=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/cacerts.pem
AllowLargeResults=0
Min_TLS=1.2
DefaultStringColumnLength=16384
LargeResultsTempTableExpirationTime=3600000
RowsFetchedPerBlock=100000
Timeout=300000
SSL=0

To check if the ODBC driver is connected, execute isql [ODBC_NAME] -v in the execution service CLI.
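Before running isql, the odbc.ini contents can be sanity-checked programmatically. A minimal Python sketch (the gbq DSN name and driver path are taken from the example above; this only checks the file's structure, not connectivity):

```python
import configparser

def dsn_is_registered(odbc_ini_text: str, dsn: str) -> bool:
    """Check that a DSN is listed under [ODBC Data Sources] and has a Driver set."""
    config = configparser.ConfigParser()
    config.read_string(odbc_ini_text)
    return (
        config.has_option("ODBC Data Sources", dsn)
        and config.has_option(dsn, "Driver")
    )

# Minimal example mirroring the odbc.ini above
sample = """
[ODBC Data Sources]
gbq=Simba ODBC Driver for Google BigQuery 64-bit

[gbq]
Driver=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/libgooglebigqueryodbc_sb64.so
"""
print(dsn_is_registered(sample, "gbq"))  # → True
```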

  3. Request a JSON Google service account key file, or create one using this guide.

  4. Place the JSON key file inside /config/odbc/google-creds.

  5. In the Pulse database, update the ConnectionConfiguration field in the rpi_Clients table for the BigQuery client: {"ProjectId":"[GOOGLE_PROJECT_ID]","ServiceAccountJsonPath":"/app/odbc-google-creds-config/[GOOGLE_SERVICE_ACCOUNT_KEYFILE].json"}.
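Because the ConnectionConfiguration value is plain JSON, generating it programmatically avoids quoting mistakes. A minimal Python sketch (field names as in the example above; bracketed values are placeholders for your environment):

```python
import json

# Bracketed values are placeholders for your environment
connection_configuration = {
    "ProjectId": "[GOOGLE_PROJECT_ID]",
    "ServiceAccountJsonPath": "/app/odbc-google-creds-config/[GOOGLE_SERVICE_ACCOUNT_KEYFILE].json",
}

# Serialize to the compact JSON string stored in rpi_Clients.ConnectionConfiguration
value = json.dumps(connection_configuration, separators=(",", ":"))
print(value)
```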

Google BigTable configuration

This section describes how to configure Google BigTable. Please follow the steps below:

Obtaining a BigTable instance ID

  1. In a web browser, navigate to https://console.cloud.google.com to log into the Google Cloud console.

  2. Once you have successfully logged on, you will be redirected to the portal’s main page.

  3. In the dashboard navigation menu, select BigTable.

  4. Once the BigTable interface is displayed, take note of the BigTable Instance ID, as this will be used to configure Google BigQuery.

Obtaining a Google Cloud project ID

  1. While in the Google Cloud console, select the project selector to the right side of the Google Cloud Platform interface.

  2. Once the project selector is displayed, take note of the currently-selected project ID, as this will be used to configure Google BigQuery.

Creating a Google BigQuery dataset

  1. Open the BigQuery page in the Google Cloud console.

  2. In the Explorer panel, select the project where you want to create the dataset.

  3. Expand the three-dot action menu and select Create dataset.

  4. On the Create dataset page:

    • For Dataset ID, enter a unique dataset name.

    • For Data location, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.

  5. Select Create dataset.

Creating a Google BigTable table

  1. Open the list of Bigtable instances in the Google Cloud console.

  2. Select the instance whose tables you want to view.

  3. Select Tables in the left pane. The Tables page displays a list of tables in the instance.

  4. Select Create a table.

  5. Enter a table ID for the table.

  6. Add column families (optional).

  7. Select Create.

Creating a Google BigQuery table

  1. In the Google Cloud console, go to the BigQuery page.

  2. In the query editor, enter the following statement.

SQL
CREATE EXTERNAL TABLE DATASET.NEW_TABLE
OPTIONS (
  format = 'CLOUD_BIGTABLE',
  uris = ['URI'],
  bigtable_options = BIGTABLE_OPTIONS );
  3. Replace the following:

    • DATASET: the name of the dataset created in Google BigQuery.

    • NEW_TABLE: the table ID of the Bigtable table created earlier.

    • URI: https://googleapis.com/bigtable/projects/[YOUR-PROJECT-ID]/instances/[YOUR-BIGTABLE-INSTANCE-ID]/tables/[YOUR-BIGTABLE-TABLE-ID].

    • BIGTABLE_OPTIONS: the schema for the Bigtable table in JSON format.

SQL
CREATE EXTERNAL TABLE DATASET.NEW_TABLE
OPTIONS (
  format = 'CLOUD_BIGTABLE',
  uris = ['https://googleapis.com/bigtable/projects/[PROJECT-ID]/instances/[INSTANCE-ID]/tables/[TABLE-ID]'],
  bigtable_options = """
    {
      "columnFamilies": [
        {
          "familyId": "[FamilyName]",
          "type": "INTEGER",
          "encoding": "BINARY"
        }
      ],
      "readRowkeyAsString": true
    }
    """
);
  4. Select RUN.
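The statement above can also be assembled with a script, which makes it easy to confirm the bigtable_options value is valid JSON before running it. A minimal Python sketch (all dataset, project, instance, and table names are illustrative placeholders):

```python
import json

def build_external_table_ddl(dataset, table, project, instance, bt_table, options):
    """Assemble the CREATE EXTERNAL TABLE statement shown above."""
    uri = (f"https://googleapis.com/bigtable/projects/{project}"
           f"/instances/{instance}/tables/{bt_table}")
    options_json = json.dumps(options, indent=2)  # must serialize to valid JSON
    return (f"CREATE EXTERNAL TABLE {dataset}.{table}\n"
            "OPTIONS (\n"
            "  format = 'CLOUD_BIGTABLE',\n"
            f"  uris = ['{uri}'],\n"
            f'  bigtable_options = """{options_json}""" );')

ddl = build_external_table_ddl(
    "my_dataset", "my_table", "my-project", "my-instance", "my-bt-table",
    {"columnFamilies": [{"familyId": "stats", "type": "INTEGER", "encoding": "BINARY"}],
     "readRowkeyAsString": True},
)
print(ddl)
```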

Installing Google BigQuery ODBC driver for BigTable

The Google Bigtable database connector requires the Google BigQuery ODBC driver; please refer to the Google BigQuery ODBC driver configuration section above. Once everything is set, you may test the connection by selecting Test. Otherwise, select OK to save your settings.

Azure MySQL ODBC configuration

This section describes how to create and configure an Azure MySQL Data Source Name (DSN) running in a container environment. Please follow the steps below.

  1. In your container, launch a command line.

  2. Type the commands to install the latest MySQL ODBC database driver.

CODE
> apt-get update
> apt-get upgrade
> apt-get install -y wget # install wget
> cd /
> mkdir download
> cd download
> wget https://dev.mysql.com/get/Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit.tar.gz
> tar -xvf mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit.tar.gz
> mkdir /usr/local/mysql
> cp -R mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit/lib/ /usr/local/mysql
> cp -R mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit/bin/ /usr/local/mysql
  3. Under the \odbc\config directory, create the following files: odbc.ini and odbcinst.ini.

You may use Notepad or any other preferred text editor to accomplish this.

  4. Add the following entries in odbc.ini and save the file.

CODE
;
;  odbc.ini configuration for Connector/ODBC 8.0 driver
;

[ODBC Data Sources]
myodbc8w     = MyODBC 8.0 UNICODE Driver DSN
myodbc8a     = MyODBC 8.0 ANSI Driver DSN

[myodbc8w]
Driver       = /usr/local/mysql/lib/libmyodbc8w.so
Description  = Connector/ODBC 8.0 UNICODE Driver DSN
SERVER       = <REPLACE THIS WITH YOUR MYSQL SERVER NAME>
PORT         = <REPLACE THIS WITH YOUR MYSQL PORT NUMBER. Default value is 3306>
USER         = <REPLACE THIS WITH YOUR MYSQL USER NAME>
Password     = <REPLACE THIS WITH YOUR MYSQL PASSWORD>
Database     = <REPLACE THIS WITH YOUR MYSQL DATABASE NAME>
OPTION       = 3
SOCKET       =
SSLMODE      = REQUIRED
CHARSET      = utf8mb4

[myodbc8a]
Driver       = /usr/local/mysql/lib/libmyodbc8a.so
Description  = Connector/ODBC 8.0 ANSI Driver DSN
SERVER       = <REPLACE THIS WITH YOUR MYSQL SERVER NAME>
PORT         = <REPLACE THIS WITH YOUR MYSQL PORT NUMBER. Default value is 3306>
USER         = <REPLACE THIS WITH YOUR MYSQL USER NAME>
Password     = <REPLACE THIS WITH YOUR MYSQL PASSWORD>
Database     = <REPLACE THIS WITH YOUR MYSQL DATABASE NAME>
OPTION       = 3
SOCKET       =
SSLMODE      = REQUIRED
CHARSET      = utf8mb4
  5. Add the following entries in odbcinst.ini and save the file.

CODE
[MyODBC 8.0 UNICODE Driver DSN]
Description=Connector/ODBC 8.0 UNICODE Driver DSN
Driver=libmyodbc8w.so
Setup=libmyodbc8w.so
Debug=0
CommLog=1
UsageCount=2

[MyODBC 8.0 ANSI Driver DSN]
Description=Connector/ODBC 8.0 ANSI Driver DSN
Driver=libmyodbc8a.so
Setup=libmyodbc8a.so
Debug=0
CommLog=1
UsageCount=2
  6. In your docker-compose.yml file, ensure that the following entries are defined.

YAML
volumes:
  - ./config/odbc:/app/odbc-config
  7. When provisioning a new Redpoint Interaction client, use myodbc8w as the data source name.

  8. Restart the container.
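The DSN entries above can also be generated with a script, which helps keep odbc.ini consistent across environments. A minimal Python sketch using configparser (all server and credential values are placeholders):

```python
import configparser
import io

# Generate a MySQL DSN entry like the odbc.ini above (all values are placeholders)
config = configparser.ConfigParser()
config.optionxform = str  # keep key case as written (ODBC keys are case-insensitive anyway)
config["ODBC Data Sources"] = {"myodbc8w": "MyODBC 8.0 UNICODE Driver DSN"}
config["myodbc8w"] = {
    "Driver": "/usr/local/mysql/lib/libmyodbc8w.so",
    "SERVER": "my-server.mysql.database.azure.com",
    "PORT": "3306",
    "USER": "my-user",
    "Password": "my-password",
    "Database": "my-database",
    "SSLMODE": "REQUIRED",
    "CHARSET": "utf8mb4",
}

# Render the INI text that would be written to odbc.ini
buf = io.StringIO()
config.write(buf)
print(buf.getvalue())
```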

Databricks ODBC driver configuration

This section describes how to configure Databricks using the Simba Apache Spark ODBC driver. Please follow the steps below:

  1. In your docker-compose.yml file, ensure that the following entries are defined.

YAML
volumes:
  - ./config/odbc:/app/odbc-config
  2. Create a new or edit an existing odbc.ini located in the ./config local directory:

    • Set the Databricks ODBC name.

    • Set Driver=/app/odbc-lib/simba/spark/lib/libsparkodbc_sb64.so.

    • Provide credentials, e.g.:

CODE
[ODBC]
Trace=no

[ODBC Data Sources]
databrick=Simba Apache Spark ODBC Connector

[databrick]
Driver=/app/odbc-lib/simba/spark/lib/libsparkodbc_sb64.so
SparkServerType=3
Host=[ENVIRONMENTAL SETTING]
Port=443
SSL=1
Min_TLS=1.2
ThriftTransport=2
UID=token
PWD=[ENVIRONMENTAL SETTING]
AuthMech=3
TrustedCerts=/app/odbc-lib/simba/spark/lib/cacerts.pem
UseSystemTrustStore=0
HTTPPath=/sql/1.0/warehouses/c546e1e69e8d2ac9

User credentials: Personal access token

  1. Log in using your corporate account.

  2. Create your own personal access token from User Settings > User > Developer > Access Token.

  3. Use “token” as the username, and enter your personal access token as the password.

Interaction configuration: Schema

Schema = <catalog>.<schema>
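In other words, the Schema value is the Databricks catalog name and schema name joined with a dot. A trivial Python sketch (the catalog and schema names are illustrative):

```python
def interaction_schema(catalog: str, schema: str) -> str:
    """Compose the RPI Schema setting from a Databricks catalog and schema."""
    return f"{catalog}.{schema}"

print(interaction_schema("main", "marketing"))  # → main.marketing
```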

AWS Redshift configuration

This section describes how to create and configure an AWS Redshift Data Source Name (DSN). Please follow the steps below. Note that you may skip steps 1–4 if you have already installed the ODBC driver.

  1. In a web browser, navigate to https://docs.aws.amazon.com/redshift/latest/mgmt/install-odbc-driver-windows.html to download the driver.

  2. Download the 64-bit Amazon Redshift ODBC installer from https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.5.9.1011/AmazonRedshiftODBC64-1.5.9.1011.msi.

  3. In the download folder, double-click the AmazonRedshiftODBC64-1.5.9.1011.msi file.

  4. In the Amazon Redshift ODBC Driver 64-bit Setup Window, select Next and follow the required steps to install the driver.

  5. Once you have successfully installed the ODBC driver, go to Control Panel\All Control Panel Items\Administrative Tools and select Data Sources (ODBC).

  6. In the ODBC Data Source Administrator Window, select the System DSN tab.

  7. Choose the Add... button to create a new Data Source.

  8. Find and select Amazon Redshift (x64) and select Finish.

  9. In the Amazon Redshift ODBC Driver DSN Setup Window, configure the following details:

    • Data Source Name: the name of the data source.

    • Server: the Amazon Redshift cluster endpoint URL, e.g., "xxx.xxx.endpointregion.redshift.amazonaws.com".

    • Port: the cluster port number. Only numerical values are supported. The default is 5439.

    • Database: the database’s name.

    • Auth Type: must be set to "Standard".

    • User: the database username.

    • Password: the database password.

    • Encrypt Password For: must be set to "All Users of This Machine".

    • Additional Options: change the radio button selection to "Use Multiple Statements".

  10. Select the Test button. Once the connection has been made successfully, choose the OK button to create the DSN.

  11. Once completed, launch Server Workbench and log in.

  12. In the Install Client at Data warehouse pane, select AWS Redshift as the database provider.

    1. Enter the Server name. This can be either the DNS (Domain Name System) name or an IP address of the server.

    2. Enter the Database provider.

    3. Enter the Data Source Name.

    4. Enter the Database schema.

  13. Check the End User License Agreement checkbox and then select Next.
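Before entering the DSN details, the endpoint and port values can be sanity-checked. A minimal Python sketch based on the endpoint format and default port described above (the helper name and example endpoint are illustrative):

```python
def validate_redshift_dsn(server: str, port: int) -> bool:
    """Check that the server looks like a Redshift cluster endpoint
    and the port is a valid numerical port (default is 5439)."""
    return server.endswith(".redshift.amazonaws.com") and 0 < port <= 65535

print(validate_redshift_dsn(
    "examplecluster.abc123.us-east-1.redshift.amazonaws.com", 5439))  # → True
```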

SQL Server configuration

This section describes how to configure a SQL Server connection for a specific Redpoint Interaction tenant running within a container application. Please follow the steps below:

  1. In the Configuration Editor’s Data Warehouse Settings section, select the SQLServer provider.

  2. Enter the required parameters such as Server, Database Name, Username and Password.

image-20241211-094827.png
  3. If required, specify optional parameters (e.g., per the SQL Server Settings section).

  4. Copy the JSON text template to provision the new tenant.

image-20241211-095031.png
  5. Once the tenant has been provisioned, a sample connection string is generated.

CODE
Server=localhost;Database=AdventureWorks;UID=sa;PWD=******;
Encrypt=True;TrustServerCertificate=True;ConnectRetryCount=3;ConnectRetryInterval=10
  6. For more information about SQL Server connection string parameters, please see https://learn.microsoft.com/en-us/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-standard-5.2.
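Connection strings of this form are semicolon-delimited key/value pairs. A small Python sketch that parses the sample above (the helper name is illustrative):

```python
def parse_connection_string(value: str) -> dict:
    """Split a semicolon-delimited connection string into key/value pairs."""
    pairs = (part.split("=", 1) for part in value.split(";") if part)
    return {k.strip(): v.strip() for k, v in pairs}

# The sample connection string generated above
sample = ("Server=localhost;Database=AdventureWorks;UID=sa;PWD=******;"
          "Encrypt=True;TrustServerCertificate=True;"
          "ConnectRetryCount=3;ConnectRetryInterval=10")
parts = parse_connection_string(sample)
print(parts["Database"])  # → AdventureWorks
```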

AzureSQL configuration

This section describes how to configure an AzureSQL connection for a specific Redpoint Interaction tenant running within a container application. Please follow the steps below:

  1. In the Configuration Editor’s Data Warehouse Settings section, select the AzureSQLDatabase provider.

  2. Enter the required parameters such as Server, Database Name, Username and Password.

image-20241211-095422.png
  3. If required, specify optional parameters (e.g., per the SQL Server Settings section).

  4. Copy the JSON text template to provision the new tenant.

image-20241211-095505.png
  5. Once the tenant has been provisioned, a sample connection string is generated.

CODE
Server=acme_inc.database.windows.net,1433;Database=AdventureWorks;User ID=sa;
Password=*****;Trusted_Connection=False;Encrypt=True;Connection Timeout=0;
ConnectRetryCount=3;ConnectRetryInterval=10;TrustServerCertificate=True
  6. For more information about AzureSQL connection string parameters, please see https://learn.microsoft.com/en-us/azure/azure-sql/database/connect-query-content-reference-guide?view=azuresql.

PostgreSQL configuration

This section describes how to configure a PostgreSQL connection for a specific Redpoint Interaction tenant running within a container application. Please follow the steps below:

  1. In the Configuration Editor’s Data Warehouse Settings section, select the PostgreSQL provider.

  2. Enter the required parameters such as Server, Database Name, Username and Password.

image-20241211-114229.png
  3. Copy the JSON text template to provision the new tenant.

image-20241211-114314.png
  4. Once the tenant has been provisioned, a sample connection string is generated.

CODE
Server=localhost;Database=AdventureWorks;Port=1532;User Id=postgres;
Password=*****;SslMode=Disable
  5. For more information about PostgreSQL connection string parameters, please see https://www.npgsql.org/doc/connection-string-parameters.html.

Azure Database for PostgreSQL configuration

This section describes how to configure an Azure Database for PostgreSQL connection for a specific Redpoint Interaction tenant running within a container application. Please follow the steps below:

  1. In the Configuration Editor’s Data Warehouse Settings section, select the AzureDatabasePostgreSQL provider.

  2. Enter the required parameters such as Server, Database Name, Username and Password.

image-20241211-115134.png
  3. Copy the JSON text template to provision the new tenant.

image-20241211-115201.png
  4. Once the tenant has been provisioned, a sample connection string is generated.

CODE
Server=client_db.postgres.database.azure.com;Database=AdventureWorks;
Port=5432;User Id=postgres;Password=*****;SslMode=Disable
  5. For more information about Azure Database for PostgreSQL connection string parameters, please see https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/connect-csharp and https://www.npgsql.org/doc/connection-string-parameters.html.

Teradata configuration

This section describes how to configure a Teradata connection for a specific Redpoint Interaction tenant running within a container application. Please follow the steps below:

  1. In the Configuration Editor’s Data Warehouse Settings section, select the Teradata provider.

  2. Enter the required parameters such as Server, Database Name, Username and Password.

image-20241211-115825.png
  3. Copy the JSON text template to provision the new tenant.

image-20241211-115850.png
  4. Once the tenant has been provisioned, a sample connection string is generated.

CODE
Data Source=acme_inc.teradata.com;User ID=user1;Password=*****
  5. For more information about Teradata connection string parameters, please see https://teradata-docs.s3.amazonaws.com/doc/connectivity/tdnetdp/20.00/help/webframe.html.

Snowflake configuration

This section describes how to configure a Snowflake connection for a specific Redpoint Interaction tenant running within a container application. Please follow the steps below:

  1. In the Configuration Editor’s Data Warehouse Settings section, select the Snowflake provider.

  2. Enter the required parameters such as Server, Database Name, Username and Password. You will need to provide a value for the Host Name parameter if Snowflake is configured in a cluster other than the default.

image-20241211-120151.png
  3. Copy the JSON text template to provision the new tenant.

image-20241211-120237.png
  4. Once the tenant has been provisioned, a sample connection string is generated.

CODE
account=acme;user=user1;password=*****;db=AdventureWorks
  5. For more information about Snowflake connection string parameters, please see https://github.com/snowflakedb/snowflake-connector-net/blob/master/doc/Connecting.md.
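The Snowflake connection string is a set of semicolon-delimited key=value pairs, so it can be assembled from its parts; a small Python sketch (all values are placeholders):

```python
# Assemble the Snowflake connection string from its parts; every key=value
# pair must be separated by a semicolon (values below are placeholders)
params = {"account": "acme", "user": "user1", "password": "*****", "db": "AdventureWorks"}
conn_str = ";".join(f"{k}={v}" for k, v in params.items())
print(conn_str)  # → account=acme;user=user1;password=*****;db=AdventureWorks
```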
