
Admin: Appendix A - Database preparation

Overview

This section describes the steps to be undertaken to ensure that databases are prepared for use by RPI as a data warehouse or auxiliary database.

Google BigQuery ODBC driver configuration

This section describes how to configure Google BigQuery using the Simba ODBC driver in the Redpoint Interaction Docker-based image. Please follow the steps below:

  • In your docker-compose.yml file, ensure that the following entries are defined:

CODE
volumes:
  - ./config/odbc:/app/odbc-config
  - ./config/odbc/google-creds:/app/odbc-google-creds-config
  • Create a new, or edit the existing, odbc.ini file located in the local ./config directory:

    • Set the BigQuery ODBC name and provide credentials.

    • Set the Driver parameter

Driver=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/libgooglebigqueryodbc_sb64.so

e.g.:

CODE
[ODBC]
Trace=no

[ODBC Data Sources]
gbq=Simba ODBC Driver for Google BigQuery 64-bit

[gbq]
Description=Simba ODBC Driver for Google BigQuery (64-bit) DSN
Driver=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/libgooglebigqueryodbc_sb64.so
Catalog=[ENVIRONMENTAL SETTING]
DefaultDataset=[ENVIRONMENTAL SETTING]
SQLDialect=1
OAuthMechanism=1
RefreshToken=[ENVIRONMENTAL SETTING]
KeyFilePath=[ENVIRONMENTAL SETTING]
TrustedCerts=/app/odbc-lib/bigquery/SimbaODBCDriverforGoogleBigQuery64/lib/cacerts.pem
AllowLargeResults=0
Min_TLS=1.2
DefaultStringColumnLength=16384
LargeResultsTempTableExpirationTime=3600000
RowsFetchedPerBlock=100000
Timeout=300000
SSL=0
  • To check if the ODBC driver is connected, execute ‘isql [ODBC_NAME] -v’ in the execution service CLI.

  • Request a JSON Google service account key file, or create one using this guide:

https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-gcloud

  • Place the JSON key file inside ‘/config/odbc/google-creds’.

  • In the Pulse database, update the ConnectionConfiguration field in the rpi_Clients table for the BigQuery client:

{"ProjectId":"[GOOGLE_PROJECT_ID]","ServiceAccountJsonPath":"/app/odbc-google-creds-config/[GOOGLE_SERVICE_ACCOUNT_KEYFILE].json"}
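
The ConnectionConfiguration value is a small JSON document, so it can be built and validated programmatically before updating the rpi_Clients table. A minimal sketch; the project ID and key-file name below are hypothetical placeholders, not values from your environment:

```python
import json

# Hypothetical placeholder values - substitute your own project ID and key-file name.
project_id = "my-gcp-project"
keyfile_name = "rpi-service-account"

# Assemble the ConnectionConfiguration JSON stored in the rpi_Clients table.
config = {
    "ProjectId": project_id,
    "ServiceAccountJsonPath": f"/app/odbc-google-creds-config/{keyfile_name}.json",
}
value = json.dumps(config)
print(value)
```

Note that ServiceAccountJsonPath is the absolute path of the key file inside the container, i.e. under the /app/odbc-google-creds-config mount defined in docker-compose.yml.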

Google BigTable configuration

This section describes how to configure Google BigTable. Please follow the steps below:

Obtaining a BigTable instance ID

  1. In a web browser, navigate to https://console.cloud.google.com to log into the Google Cloud console.

  2. Once you have successfully logged on, you will be redirected to the portal’s main page.

  3. In the dashboard navigation menu, click BigTable.

  4. Once the BigTable interface is displayed, take note of the BigTable Instance ID, as this will be used to configure Google BigQuery.

Obtaining a Google Cloud project ID

  1. While in the Google Cloud console, click the project selector to the right side of the Google Cloud Platform interface.

  2. Once the project selector is displayed, take note of the currently-selected project ID, as this will be used to configure Google BigQuery.

Creating a Google BigQuery dataset

  1. Open the BigQuery page in the Google Cloud console. (https://console.cloud.google.com/bigquery)

  2. In the Explorer panel, select the project where you want to create the dataset.

  3. Expand the three-dot action menu and click Create dataset.

  4. On the Create dataset page:

    1. For Dataset ID, enter a unique dataset name.

    2. For Data location, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.

    3. Click Create dataset.

Creating a Google BigTable table 

  1. Open the list of Bigtable instances in the Google Cloud console. (https://console.cloud.google.com/bigtable/instances). 

  2. Click the instance whose tables you want to view. 

  3. Click Tables in the left pane. 

  4. The Tables page displays a list of tables in the instance. 

  5. Click Create a table.

  6. Enter a table ID for the table. 

  7. Add column families (optional). 

  8. Click Create.

Creating a Google BigQuery table 

  1. In the Google Cloud console, go to the BigQuery page (https://console.cloud.google.com/bigquery).

  2. In the query editor, enter the following statement:

CODE
CREATE EXTERNAL TABLE DATASET.NEW_TABLE
OPTIONS (
  format = 'CLOUD_BIGTABLE',
  uris = ['URI'],
  bigtable_options = BIGTABLE_OPTIONS );

Replace the placeholder values, e.g.:

CODE
CREATE EXTERNAL TABLE DATASET.NEW_TABLE
OPTIONS (
  format = 'CLOUD_BIGTABLE',
  uris = ['https://googleapis.com/bigtable/projects/[PROJECT-ID]/instances/[INSTANCE-ID]/tables/[TABLE-ID]'],
  bigtable_options = """
    {
      bigtableColumnFamilies: [
        {
          "familyId": "[FamilyName]",
          "type": "INTEGER",
          "encoding": "BINARY"
        }
      ],
      readRowkeyAsString: true
    }
    """
);
  3. Click RUN.
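
The table URI in the statement above follows a fixed pattern. A minimal sketch of assembling it; the project, instance, and table IDs here are hypothetical placeholders:

```python
# Hypothetical identifiers - replace with your own project, instance, and table IDs.
project_id = "my-gcp-project"
instance_id = "my-bigtable-instance"
table_id = "my-table"

# The Bigtable URI used in the CREATE EXTERNAL TABLE statement's uris option.
uri = (
    "https://googleapis.com/bigtable/projects/"
    f"{project_id}/instances/{instance_id}/tables/{table_id}"
)
print(uri)
```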

Installing Google BigQuery ODBC driver for BigTable 

The Google BigTable database connector requires the Google BigQuery ODBC driver; please refer to the Google BigQuery ODBC driver configuration section above. Once everything is set, you may test the connection by clicking Test. Otherwise, click OK to save your settings.

PostgreSQL ODBC configuration

This section describes how to create and configure a PostgreSQL Data Source Name (DSN) running in a container environment. Please follow the steps below.

  1. In your container, launch a command line.

  2. Type the command apt-get install odbc-postgresql to install the latest PostgreSQL ODBC database driver.

  3. Once you have successfully installed the ODBC driver in the container, go to the container’s root directory and locate the odbc directory under the /config directory. If it does not exist yet, create it.

  4. Under the /config/odbc directory, create the following files:

odbc.ini and odbcinst.ini

You may use any preferred text editor to accomplish this.

  5. Add the following entries in odbc.ini and save the file.

CODE
[psql]
Description=PostgreSQL
Driver=PostgreSQL Unicode
Trace=No
TraceFile=/tmp/psqlodbc.log
Database=<REPLACE THIS WITH YOUR POSTGRESQL DATABASE NAME>
Servername=<REPLACE THIS WITH YOUR POSTGRESQL SERVER NAME>
UserName=<REPLACE THIS WITH YOUR POSTGRESQL USER NAME>
Password=<REPLACE THIS WITH YOUR POSTGRESQL PASSWORD>
Port=<REPLACE THIS WITH YOUR POSTGRESQL PORT NUMBER. Default value is 5432>
ReadOnly=No
RowVersioning=No
ShowSystemTables=No
ShowOidColumn=No
FakeOidIndex=No
  6. Add the following entries in odbcinst.ini and save the file.

CODE
[PostgreSQL ANSI]
Description=PostgreSQL ODBC driver (ANSI version)
Driver=psqlodbca.so
Setup=libodbcpsqlS.so
Debug=0
CommLog=1
UsageCount=2

[PostgreSQL Unicode]
Description=PostgreSQL ODBC driver (Unicode version)
Driver=psqlodbcw.so
Setup=libodbcpsqlS.so
Debug=0
CommLog=1
UsageCount=2
  7. In your docker-compose.yml file, ensure that the following entries are defined:

CODE
volumes:
  - ./config/odbc:/app/odbc-config
  8. When provisioning a new Redpoint Interaction client, use psql as the data source name.
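
When provisioning several environments, the odbc.ini entries above can also be generated from code instead of edited by hand. A minimal sketch using Python's configparser; the connection values are hypothetical placeholders:

```python
import configparser
import io

dsn = configparser.ConfigParser()
dsn.optionxform = str  # preserve key case, e.g. "Servername"

# Hypothetical connection values - replace with your own.
dsn["psql"] = {
    "Description": "PostgreSQL",
    "Driver": "PostgreSQL Unicode",
    "Database": "rpi_db",
    "Servername": "postgres-host",
    "UserName": "rpi_user",
    "Password": "secret",
    "Port": "5432",
    "ReadOnly": "No",
}

# Render the file content that would be written to /config/odbc/odbc.ini.
buf = io.StringIO()
dsn.write(buf)
ini_text = buf.getvalue()
print(ini_text)
```

unixODBC accepts spaces around the equals sign, so the rendered output is a valid odbc.ini fragment.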

Azure MySQL ODBC configuration

This section describes how to create and configure an Azure MySQL Data Source Name (DSN) running in a container environment. Please follow the steps below.

  1. In your container, launch a command line.

  2. Run the following commands to install the latest MySQL ODBC database driver.

CODE
> apt-get update
> apt-get upgrade
> apt-get install -y wget # install wget
> cd /
> mkdir download
> cd download
> wget https://dev.mysql.com/get/Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit.tar.gz
> tar -xvf mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit.tar.gz
> mkdir /usr/local/mysql
> cp -R mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit/lib/ /usr/local/mysql
> cp -R mysql-connector-odbc-8.0.32-linux-glibc2.28-x86-64bit/bin/ /usr/local/mysql
  3. Under the /config/odbc directory, create the following files:

odbc.ini and odbcinst.ini

You may use any preferred text editor to accomplish this.

  4. Add the following entries in odbc.ini and save the file.

CODE
;
;  odbc.ini configuration for Connector/ODBC 8.0 driver
;

[ODBC Data Sources]
myodbc8w     = MyODBC 8.0 UNICODE Driver DSN
myodbc8a     = MyODBC 8.0 ANSI Driver DSN

[myodbc8w]
Driver       = /usr/local/mysql/lib/libmyodbc8w.so
Description  = Connector/ODBC 8.0 UNICODE Driver DSN
SERVER       = <REPLACE THIS WITH YOUR MYSQL SERVER NAME>
PORT         = <REPLACE THIS WITH YOUR MYSQL PORT NUMBER. Default value is 3306>
USER         = <REPLACE THIS WITH YOUR MYSQL USER NAME>
Password     = <REPLACE THIS WITH YOUR MYSQL PASSWORD>
Database     = <REPLACE THIS WITH YOUR MYSQL DATABASE NAME>
OPTION       = 3
SOCKET       =
SSLMODE      = REQUIRED
CHARSET      = utf8mb4

[myodbc8a]
Driver       = /usr/local/mysql/lib/libmyodbc8a.so
Description  = Connector/ODBC 8.0 ANSI Driver DSN
SERVER       = <REPLACE THIS WITH YOUR MYSQL SERVER NAME>
PORT         = <REPLACE THIS WITH YOUR MYSQL PORT NUMBER. Default value is 3306>
USER         = <REPLACE THIS WITH YOUR MYSQL USER NAME>
Password     = <REPLACE THIS WITH YOUR MYSQL PASSWORD>
Database     = <REPLACE THIS WITH YOUR MYSQL DATABASE NAME>
OPTION       = 3
SOCKET       =
SSLMODE      = REQUIRED
CHARSET      = utf8mb4
  5. Add the following entries in odbcinst.ini and save the file.

CODE
[MyODBC 8.0 UNICODE Driver DSN]
Description=Connector/ODBC 8.0 UNICODE Driver DSN
Driver=libmyodbc8w.so
Setup=libmyodbc8w.so
Debug=0
CommLog=1
UsageCount=2

[MyODBC 8.0 ANSI Driver DSN]
Description=Connector/ODBC 8.0 ANSI Driver DSN
Driver=libmyodbc8a.so
Setup=libmyodbc8a.so
Debug=0
CommLog=1
UsageCount=2
  6. In your docker-compose.yml file, ensure that the following entries are defined:

CODE
volumes:
  - ./config/odbc:/app/odbc-config
  7. When provisioning a new Redpoint Interaction client, use myodbc8w as the data source name.

  8. Restart the container.
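
A common failure mode with this setup is a DSN whose driver library does not match a driver registered in odbcinst.ini. A minimal sanity-check sketch, parsing in-memory copies of the two files; the server and database values are hypothetical:

```python
import configparser

# Trimmed-down copies of the files from this section; SERVER/Database are hypothetical.
odbc_ini = """
[myodbc8w]
Driver   = /usr/local/mysql/lib/libmyodbc8w.so
SERVER   = mysql-host
PORT     = 3306
Database = rpi_db
"""

odbcinst_ini = """
[MyODBC 8.0 UNICODE Driver DSN]
Driver = libmyodbc8w.so
"""

dsns = configparser.ConfigParser()
dsns.optionxform = str
dsns.read_string(odbc_ini)

drivers = configparser.ConfigParser()
drivers.optionxform = str
drivers.read_string(odbcinst_ini)

# The DSN's driver library file name should appear among the registered drivers.
dsn_driver = dsns["myodbc8w"]["Driver"].rsplit("/", 1)[-1]
registered = {s["Driver"] for s in drivers.values() if "Driver" in s}
print(dsn_driver in registered)
```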

Snowflake ODBC driver configuration

This section describes how to configure Snowflake using the Simba ODBC driver. Please follow the steps below:

  1. In your docker-compose.yml file, ensure that the following entries are defined:

CODE
volumes:
  - ./config/odbc:/app/odbc-config
  2. Create a new, or edit the existing, odbc.ini file located in the local ./config directory:

    • Set the Snowflake ODBC name.

    • Set Driver=/app/odbc-lib/snowflake_odbc/lib/libSnowflake.so

    • Provide credentials.
      e.g.

CODE
[ODBC]
Trace=no

[ODBC Data Sources]
snowflakeDSN=SnowflakeDSIIDriver

[snowflakeDSN]
Description=SnowflakeDB
Driver=/app/odbc-lib/snowflake_odbc/lib/libSnowflake.so
Locale=en-US
SERVER=[ENVIRONMENTAL SETTING]
PORT=443
SSL=on
role=[ENVIRONMENTAL SETTING]
database=[ENVIRONMENTAL SETTING]
schema=[ENVIRONMENTAL SETTING]
warehouse=[ENVIRONMENTAL SETTING]
UID=[ENVIRONMENTAL SETTING]
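
Note that the DSN above omits PWD, so the password is typically supplied at connect time rather than stored in odbc.ini. A minimal sketch of assembling the ODBC connection string an application would pass; the user name and password are hypothetical placeholders:

```python
# Hypothetical credentials - replace with your own; the password is supplied
# at connect time rather than stored in odbc.ini.
params = {
    "DSN": "snowflakeDSN",
    "UID": "rpi_user",
    "PWD": "secret",
}

# ODBC connection strings are semicolon-delimited key=value pairs.
conn_str = ";".join(f"{k}={v}" for k, v in params.items())
print(conn_str)
```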

Databricks ODBC driver configuration

This section describes how to configure Databricks using the Simba Apache Spark ODBC driver. Please follow the steps below:

  1. In your docker-compose.yml file, ensure that the following entries are defined:

CODE
volumes:
  - ./config/odbc:/app/odbc-config
  2. Create a new, or edit the existing, odbc.ini file located in the local ./config directory:

    • Set the Databricks ODBC name.

    • Set Driver=/app/odbc-lib/simba/spark/lib/libsparkodbc_sb64.so

    • Provide credentials.
      e.g.

CODE
[ODBC]
Trace=no

[ODBC Data Sources]
databrick=Simba Apache Spark ODBC Connector

[databrick]
Driver=/app/odbc-lib/simba/spark/lib/libsparkodbc_sb64.so
SparkServerType=3
Host=[ENVIRONMENTAL SETTING]
Port=443
SSL=1
Min_TLS=1.2
ThriftTransport=2
UID=token
PWD=[ENVIRONMENTAL SETTING]
AuthMech=3
TrustedCerts=/app/odbc-lib/simba/spark/lib/cacerts.pem
UseSystemTrustStore=0
HTTPPath=/sql/1.0/warehouses/c546e1e69e8d2ac9
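
The HTTPPath value above points at a specific Databricks SQL warehouse: it is /sql/1.0/warehouses/ followed by the warehouse ID, which you can copy from the warehouse's connection details. A minimal sketch; the warehouse ID is the example value from this section:

```python
# Example warehouse ID from this section - replace with the ID shown in your
# SQL warehouse's connection details.
warehouse_id = "c546e1e69e8d2ac9"

http_path = f"/sql/1.0/warehouses/{warehouse_id}"
print(http_path)
```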