Overview
This page describes a methodology for automating the migration of RPI v6.x files (assets, smart assets, interactions, etc.) using the RPI Integration APIs and the File Copy Utility. The Migration.py script at the end of this document is the basis of the discussion and can be used either as is or with customizations to fit your needs.
There are two main sections:

- Caveats: Warnings and things to keep in mind when trying to replicate this process.
- Process: Which API calls are used and how the responses are parsed, following the structure of the Migration.py script.
Caveats
- The File Copy Utility is only available in RPI v6.x environments, so this process does not apply to v7.x environments.
- If you are not on RPI v6.6.2356 or higher, you will have to get the File Copy Utility from the newest deployment files.
- This process is written in Python and uses a number of third-party Python libraries that must be installed alongside Python on the server running the process. This is not to say it would be impossible in another language, but others have not been tried.
- This script does not handle paging of the APIs, so it is assumed your search criteria will not return more than 20 results.
- This process is designed specifically to run on Windows.
Process
Initialization
The first time the script is run, it creates an .ini file. This file needs to be populated with information for the rest of the script to run correctly, including the base URLs and client IDs for the source and target environments.

- sourceclientsecret is assumed to be the same in each environment, because it is often not changed from the RPI default.
- fileutilitypath is the exact location of the File Copy Utility in your copy of the deployment files.
- The authorization user and password are also assumed to be the same across environments.
- buildInI() is the function which builds the .ini file and extracts its information.

Below is a copy of the .ini file.
[server]
sourceurlbase = https://local.rphelios.net/integrationapi6-6old/
sourceclientid = 12d7cfe3-d7fa-40a8-899b-8aed69d8242d
sourceclientsecret = rpiwebapiBF1C7B319E72457CB71EEC462D5BFB24
[credentials]
authmethod = password
authuser = coreuser
authpass = .Admin123
[utility]
fileutilitypath = C:\.local.RPI\RPI_DeploymentFiles-6.6.23256.1126---2023-0913\Utilities\FileCopy\RedPoint.Resonance.FileCopy.exe
[targetAPI]
targeturlbase = https://local.rphelios.net/integrationapi6-6old/
targetclientid = ecd5c006-d37a-41e9-b021-d721ca45b6a8
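As a concrete illustration, buildInI() can be sketched with Python's standard configparser module. The section and key names follow the .ini file above; the settings.ini filename and the placeholder values are assumptions, not the script's actual defaults.

```python
import configparser
import os

# Placeholder template written on the first run. Section and key names
# follow the .ini shown above; the values here are deliberate dummies
# that the user is expected to replace.
DEFAULTS = {
    "server": {
        "sourceurlbase": "https://your-server/integrationapi/",
        "sourceclientid": "<source-client-id>",
        "sourceclientsecret": "<client-secret>",
    },
    "credentials": {
        "authmethod": "password",
        "authuser": "<user>",
        "authpass": "<password>",
    },
    "utility": {
        "fileutilitypath": r"C:\path\to\RedPoint.Resonance.FileCopy.exe",
    },
    "targetAPI": {
        "targeturlbase": "https://your-server/integrationapi/",
        "targetclientid": "<target-client-id>",
    },
}

def buildInI(path="settings.ini"):
    """Create the .ini with placeholders on first run, then read it back."""
    config = configparser.ConfigParser()
    if not os.path.exists(path):
        config.read_dict(DEFAULTS)
        with open(path, "w") as f:
            config.write(f)
    config.read(path)
    return config
```

On the first run this writes the template and the script cannot proceed until the placeholders are filled in; on subsequent runs it simply returns the parsed configuration.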
Authorization
authToken = getAuthToken(sourceClientSecret, authUser, authPass, sourceUrlBase, debug)
jsonRequestHeader = getAuthedRequestHeader(sourceClientId, authToken, debug)
After all relevant information is loaded from the .ini file, the first step is authorization. A POST call is made to the '/token' endpoint, which returns a bearer token in the access_token field of the response.
url = urlBase + '/token'
jsonAuthenticationHeader = {
    'accept': 'application/json',
    'Content-Type': 'application/json'
}
jsonAuthenticationData = {
    'client_id': 'rpiwebapi',
    'client_secret': clientSecret,
    'grant_type': 'password',
    'username': authUser,
    'password': authPass
}
With the bearer token, we are able to create a request header that will be used in all subsequent calls.
jsonRequestHeader = {
    'accept': 'application/json',
    'X-ClientID': clientID,
    'Content-Type': 'application/json',
    'authorization': 'Bearer ' + authToken
}
Search Definition
searchString = getSearchString()
#***ADD CUSTOM FORMATTING HERE***
We have a simple function that takes input from the user as our search criteria. By default, this matches on the file name, except for Realtime Layouts, where it matches on the file description. If you wish to search for something like metadata in files, you can add additional formatting such as this:
searchString = r'{meta:StringMeta = "xxx"}'
Data Collection and Formatting
This section is split into two parts: one for most file types, and one for Realtime Layouts. This is because the /api/v1/client/file-system/search-file-infos endpoint cannot find Realtime Layouts. Beyond that, both follow the same basic pattern.
- Identify files by searchString
- Identify filepath
- Build a dictionary, based on the filepath, of objects to migrate
- Use the dictionary to build an array of commands
Most File Types
fileInfoResponse = exportFileInfo(sourceUrlBase, searchString, jsonRequestHeader, debug, pageSize)
objectDict, folderDict, smartAssetArray = parseFileInfos(fileInfoResponse, sourceUrlBase, debug)
commandArray = buildCommandArray(sourceUrlBase, folderDict, objectDict, debug)
To collect files which qualify for the search, a call is made to the /api/v1/client/file-system/search-file-infos endpoint:
url = urlBase + '/api/v1/client/file-system/search-file-infos'
jsonRequestData = {
    'searchString': searchString,
    'pageSize': pageSize
}
Response = requests.post(url, headers=jsonRequestHeader, json=jsonRequestData)
The full response from the above call is passed into the parseFileInfos function. This function takes the following steps:
1. Loop through the fileInfoResponse["results"] array
2. From each result = fileInfoResponse["results"][i], extract result["id"], the file GUID
3. Make a GET call to /api/v1/client/file-system/file-info?message.id={id}
4. From the response to step #3, extract fullPath
5. Create a dictionary folderDict["\path\to\file\folder"] = "id1;id2;id3"
6. Create a dictionary objectDict["id"] = data, where data is defined below:
   { "id": result["id"], "path": path, "name": result["name"] }
7. Collect an array smartAssetArray, which consists of all the smart assets being migrated, so they can be published in the target environment
8. Return smartAssetArray, objectDict, and folderDict to be used by later functions
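The steps above can be sketched as follows. To keep the sketch testable, fetch_file_info is a hypothetical callable that wraps the file-info GET call from step #3 and returns the decoded JSON for one file; the typeName value used to detect smart assets is also an assumption.

```python
def parseFileInfos(fileInfoResponse, fetch_file_info, debug=False):
    """Sketch of steps 1-8: build objectDict, folderDict, smartAssetArray."""
    objectDict, folderDict, smartAssetArray = {}, {}, []
    for result in fileInfoResponse["results"]:
        fileId = result["id"]                       # step 2: the file GUID
        info = fetch_file_info(fileId)              # step 3: file-info call
        fullPath = info["fullPath"]                 # step 4
        folder = fullPath.rsplit("\\", 1)[0]        # drop the file name
        # Step 5: semicolon-separated id list per folder path.
        ids = folderDict.get(folder)
        folderDict[folder] = fileId if ids is None else ids + ";" + fileId
        # Step 6: per-id metadata used later to build commands.
        objectDict[fileId] = {"id": fileId, "path": folder,
                              "name": result["name"]}
        # Step 7: "Smart Asset" as the typeName label is an assumption.
        if result.get("typeName") == "Smart Asset":
            smartAssetArray.append(fileId)
    return objectDict, folderDict, smartAssetArray
```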
The objectDict and folderDict from step #8 are passed into buildCommandArray, which builds a list of commands using the File Copy Utility to migrate the files identified above. For each unique folder path identified in the previous step, we do the following:
1. Create a fileNameArray and fileNameDict
2. Get the folderId using this call:
   folderPathUrl = urlBase + "/api/v1/client/file-system/folder/by-fullpath?message.fullPath=" + folderPath.replace('\\\\', "\\")
   Response = requests.get(folderPathUrl, headers=jsonRequestHeader)
3. Then, in the function parseFolderContents, use the folderId to extract the folder's full contents:
   folderContentUrl = urlBase + f"/api/v1/client/file-system/folders/content?message.id={folderId}"
   Response = requests.get(folderContentUrl, headers=jsonRequestHeader)
4. Search through the contents in fileStorageItems from the Response, skipping over files with a typeName of Folder
5. If the file["name"] is not in the fileNameArray, add it
6. Append the file["id"] to the array of IDs in fileNameDict[file["name"]]. fileNameDict will have this structure:
   { "filename1": ["id1", "id2", "id3"], "filename2": ["id4", "id5", "id6"] }
7. Repeat steps 1-6 for each file in the folder
8. Casefold-sort fileNameArray
9. Generate the directory index of each identified file within the folder in which it is located
10. Based on the directory index, create a command for the File Copy Utility:
    command = f" \"cd \{folderPath}\" \"dir\" \"copy {directoryIndex}\" \"exit\""
11. Return an array of commands
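For a single folder, the per-folder steps can be sketched as below. Here folderFiles stands for the fileStorageItems list from step #3 and targetIds for the GUIDs selected for migration in this folder; the 1-based, casefold-sorted directory index is an assumption about how the utility's dir output is numbered.

```python
def buildFolderCommand(folderPath, folderFiles, targetIds):
    """Sketch: build one File Copy Utility command for folderPath."""
    fileNameArray, fileNameDict = [], {}
    for f in folderFiles:
        if f["typeName"] == "Folder":        # step 4: skip subfolders
            continue
        if f["name"] not in fileNameArray:   # step 5
            fileNameArray.append(f["name"])
        # Step 6: collect every id seen under each file name.
        fileNameDict.setdefault(f["name"], []).append(f["id"])
    # Step 8: casefold sort, assumed to mirror the utility's "dir" order.
    fileNameArray.sort(key=str.casefold)
    # Step 9: 1-based positions of the files we actually want to copy
    # (the numbering scheme is an assumption).
    indexes = [str(i + 1) for i, name in enumerate(fileNameArray)
               if any(fid in targetIds for fid in fileNameDict[name])]
    directoryIndex = ",".join(indexes)
    # Step 10: the command format from the document.
    return f' "cd \\{folderPath}" "dir" "copy {directoryIndex}" "exit"'
```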
Realtime Layouts
layoutCommands, layoutArray = buildCommandsfromPath('Configuration Collections\Configuration Collections', sourceUrlBase, {}, debug=debug, searchString=searchString)
commandArray += layoutCommands
The difference for Realtime Layouts is that the process jumps straight to step #1 of the previous section:

- Create a fileNameArray and fileNameDict

This is because Realtime Layouts are always in the same folder. We also pass in the searchString here and filter out files based on the searchString matching the Realtime Layout's file description. layoutArray would be used to publish Realtime Layouts if there were an API to do so.
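The description-based filter can be sketched like this. The "description" field name on each fileStorageItems entry and the case-insensitive substring matching are assumptions about the script's behaviour.

```python
def filterLayoutsByDescription(fileStorageItems, searchString):
    """Sketch: keep non-folder items whose description matches the search."""
    needle = searchString.lower()
    return [item for item in fileStorageItems
            if item.get("typeName") != "Folder"          # skip subfolders
            and needle in item.get("description", "").lower()]
```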
Migration
processCommands(commandArray,fileUtilityPath,debug)
Migration combines the File Copy Utility path with the arguments provided by commandArray to spawn processes that migrate the records.

1. Concatenate fileUtilityPath + commandArray[i]
2. Spawn a process based on the command from step #1
3. Use pywinauto to identify the process from step #2 and then push the "o" key
   - This step is only necessary if the file already exists in both environments. We cannot know this before the utility is already running, so we send an "o" key press in each instance.
   - There are some delays put in place here to make sure the process stops before a new one is spawned.
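A minimal sketch of processCommands, assuming pywinauto's Application.connect/type_keys API and a fixed delay in place of the script's actual timing logic:

```python
import subprocess
import time

def processCommands(commandArray, fileUtilityPath, debug=False):
    """Sketch: spawn the File Copy Utility once per command."""
    # pywinauto is Windows-only, so import it lazily inside the function.
    from pywinauto import Application
    for command in commandArray:
        # Step 1: concatenate the utility path with the prebuilt arguments.
        fullCommand = fileUtilityPath + command
        # Step 2: spawn the File Copy Utility process.
        proc = subprocess.Popen(fullCommand)
        time.sleep(2)  # crude stand-in for the script's delays
        # Step 3: attach to the new process and answer a possible
        # overwrite prompt by sending the "o" key.
        app = Application().connect(process=proc.pid)
        app.top_window().type_keys("o")
        proc.wait()  # let the utility finish before spawning the next one
```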
Publishing Smart Assets in Target Environment
targetAuthToken = getAuthToken(sourceClientSecret, authUser, authPass, targetUrlBase, debug)
targetRequestHeader = getAuthedRequestHeader(targetClientID, targetAuthToken, debug)
publishSmartAssets(smartAssetArray, targetUrlBase, debug)
The first two steps are the same as the ones from the Authorization section above, but with the target environment information instead of the source.
The third step uses the /api/v1/client/jobs/start/publish-smart-asset endpoint to publish the list of smart assets collected in smartAssetArray.
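A sketch of publishSmartAssets is below. The endpoint comes from the document, but the request body shape and the explicit targetRequestHeader parameter are assumptions (the script's version takes only the array, URL base, and debug flag, presumably reading the header from shared scope); consult the Integration API reference for the exact message contract.

```python
import requests

def publishSmartAssets(smartAssetArray, targetUrlBase, targetRequestHeader,
                       debug=False):
    """Sketch: start a publish job for each migrated smart asset."""
    url = targetUrlBase.rstrip('/') + '/api/v1/client/jobs/start/publish-smart-asset'
    for assetId in smartAssetArray:
        # The {'fileId': ...} body is an assumption, not a documented contract.
        response = requests.post(url, headers=targetRequestHeader,
                                 json={'fileId': assetId})
        response.raise_for_status()
```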
Migration.py