
Deployment Guide For Cisco Webex Experience Management Invitation Module

1. Introduction

In an economy where customer experience trumps price and product, the Cisco Webex Experience Management platform (referred to as Experience Management from here on in this document) helps organizations exceed customer expectations and deliver business outcomes through its three pillars of customer experience.

One of the key steps in measuring customer experience is reaching out to customers over various channels, such as email, SMS, and web intercept, to solicit feedback. Among all the survey distribution channels, email and SMS are two of the most popular. Global delivery invitation management enables a personalized experience for receiving survey invitations across channels on the journey, using workflows that can be configured and reused across the chosen transmission channels such as SMS and email, while requiring no PII in the Experience Management platform.

The Invitations module has two parts:

  1. Experience Management hosted module
  2. Partner hosted module

For more details on the architecture, please refer to the “Invitations Module Architecture Document”.

In this document, we concentrate on the Partner hosted module of the Invitations feature. This involves configuring the web server, downloading code from the Git repo, generating binaries, and configuring and deploying those binaries. This module is hosted outside of the Experience Management product infrastructure and is completely managed by partners.

By the end of this document, you will have the working URLs of the hosted APIs for further configuration.

2. Scope

This document outlines the deployment process for the components of the Invitations feature:

  1. Dispatch Request – hosts the Dispatch Request, EventLog, and Account Configurations APIs. This involves an ASP.NET Core Web API deployment

  2. Account Configurations Front-End – configures vendor details for the Dispatches. Uses the Configurations APIs mentioned in #1 above

  3. Dispatcher – dispatches invitations to the vendors for their respective deliveries to the end customers/recipients

  4. Notification – sends out real-time and EOD notifications based on DIWEF logs

3. Target Audience

4. Prerequisites

  1. Must have gone through the “Invitations Module Architecture Document” to understand the overall architecture and the various components involved in the module you are setting up
  2. Must have gone through the “Infra Provisioning Guide” and provisioned all necessary services
  3. Access to an Ubuntu 18.04 server with a standard user account that has sudo privileges
  4. Must have a domain pointing to the provisioned Ubuntu server
  5. Must have a signed SSL certificate and key purchased from a valid certificate authority for all servers
  6. Must have the MongoDB connection string
  7. Access with the required permissions to the Azure or AWS portal to deploy Azure Functions or AWS Lambdas respectively
  8. The deployment steps are documented for a Windows computer. If you are using an OS other than Windows, use the equivalent SSH and SFTP tools.

5. Dispatch Request - ASP.NET Core Web API Deployment

The Web API to be deployed is developed on the ASP.NET Core 3.1 framework. The target servers are Linux only, currently Ubuntu 18.04. All the steps in the subsequent sections are written with Ubuntu 18.04 servers in mind.

5.1 Ubuntu Server installation guide

5.1.1 Login to Ubuntu Server

By this step, you should be able to log in to the Ubuntu server. Next, we will install the .NET Core SDK and runtime.
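For example, assuming key-based authentication and hypothetical host and key names, the login from a terminal (or from MobaXterm's local shell) could look like this:

# Connect to the provisioned Ubuntu 18.04 server (hypothetical host and key names)
ssh -i ~/.ssh/invitations-server.pem ubuntu@invitations.example.com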

5.1.2 Install the .NET Core SDK & runtime on the server

By this step, you should have installed the .NET Core SDK and runtime successfully. The next section is informational only, covering the Kestrel server; no action is required.
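As a minimal sketch, the .NET Core 3.1 SDK and ASP.NET Core runtime can be installed from the Microsoft package feed on Ubuntu 18.04 with the commands below; adjust the versions if your build targets a different release:

# Register the Microsoft package repository for Ubuntu 18.04
wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

# Install the .NET Core 3.1 SDK and the ASP.NET Core runtime
sudo apt-get update
sudo apt-get install -y apt-transport-https
sudo apt-get install -y dotnet-sdk-3.1 aspnetcore-runtime-3.1

# Verify the installation
dotnet --info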

5.1.3 Kestrel with a reverse proxy server

By this step, you should understand how Kestrel is used behind a reverse proxy server. Next, we will install the Nginx web server.

5.1.4 Install Nginx

sudo apt-get update
sudo apt install nginx-full

By this step, you should have installed Nginx successfully and, if the firewall permits, should be able to access the default Nginx landing page.
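If you are using UFW as the firewall, the following sketch opens HTTP/HTTPS for Nginx and verifies that the default page is served locally; skip the firewall step if it is managed elsewhere:

# Allow HTTP and HTTPS through UFW (only if UFW is in use)
sudo ufw allow 'Nginx Full'

# Confirm Nginx is running and serving the default page
systemctl status nginx --no-pager
curl -I http://localhost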

5.1.5 Set-up monitoring for ASP.NET Core app

The server is set up to forward requests made to http://<serveraddress>:80 on to the ASP.NET Core app running on Kestrel at http://127.0.0.1:5000. However, Nginx isn’t set up to manage the Kestrel process. systemd can be used to create a service file to start and monitor the underlying web app. systemd is an init system that provides many powerful features for starting, stopping, and managing processes.

At the end of this step, you should have monitoring set up for the Invitations Web API successfully; this also completes the basic server configuration. The remaining tasks are to publish and deploy the build onto the server and configure Nginx.
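As a sketch, assuming the Dispatch Request build lives under /var/www/invitations-bin and the unit is named kestrel-invitations.service (mirroring the kestrel-notifications.service unit shown in Section 7), the service can be created, enabled, and checked as follows:

# Create the unit file (its contents mirror the Section 7 example, pointed at the invitations build)
sudo nano /etc/systemd/system/kestrel-invitations.service

# Enable, start, and verify the service
sudo systemctl enable kestrel-invitations.service
sudo systemctl start kestrel-invitations.service
sudo systemctl status kestrel-invitations.service --no-pager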

5.2 Generate the executable of the ASP.NET Core Web API for Ubuntu Server

In this step, we will download the Invitations project files and compile the code using Visual Studio 2019 to make sure all the necessary libraries are available and the code compiles without any issues. The required tools at this step are Visual Studio 2019 (version 16.4.6+) and Git Bash.

The following steps are provided for Visual Studio installed on Windows. If you are working on an OS other than Windows, use these steps as a reference to publish the code; screenshots may not match exactly for Visual Studio running on other operating systems.

5.2.1 Download the code from the public repo of Invitations Project

By this step, you should have downloaded the Invitations feature code base and configured the appsettings.json file for the XM.ID.Invitations.API solution. In the next step, we will publish binaries compatible with Ubuntu servers.

5.2.2 Configure Visual Studio to publish the ubuntu binaries

By this step, you should be able to generate the binaries successfully. The next step covers the deployment of these binaries.
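If you prefer the command line to the Visual Studio publish profile, a roughly equivalent framework-dependent publish for Linux can be produced with the .NET CLI; the runtime identifier and output folder below are only examples:

# Publish a Release build targeting 64-bit Linux (framework-dependent)
dotnet publish -c Release -r linux-x64 --self-contained false -o ./publish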

5.2.3 Deploy the published folder to Ubuntu Server

We will use the same MobaXterm tool to transfer the published zip folder to the Ubuntu server.

By this step, the deployment of the Invitations build should be completed successfully. Next, we will move on to configuring the Nginx web server.
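If you are not using MobaXterm, any SCP/SFTP client works. The commands below are only a sketch, assuming a hypothetical zip name and host, and the /var/www/invitations-bin working directory referenced later in this guide:

# Copy the published zip to the server (hypothetical file, key, and host names)
scp -i ~/.ssh/invitations-server.pem invitations-publish.zip ubuntu@invitations.example.com:/tmp/

# On the server: unpack the build into the web app directory
sudo apt-get install -y unzip
sudo unzip /tmp/invitations-publish.zip -d /var/www/invitations-bin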

5.3 Configure Nginx Web Server

At this step, the Nginx server is already installed. You can verify the running process using this command.

ps ax | grep nginx



5.3.1 Configure Nginx

At this step, you will need the domain URL and the SSL certificates to be configured within Nginx.

By this step, Nginx should be running without any issues and linked to the deployed Invitations binary.
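The exact server block depends on your domain and certificates. As a minimal sketch, assuming a hypothetical domain, hypothetical certificate paths, and the default Kestrel port 5000 used earlier in this guide:

# Example server block written to a hypothetical conf.d file
sudo tee /etc/nginx/conf.d/invitations.conf > /dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name invitations.example.com;
    ssl_certificate     /etc/ssl/certs/invitations.example.com.crt;
    ssl_certificate_key /etc/ssl/private/invitations.example.com.key;

    location / {
        # Forward requests to the Kestrel-hosted Web API
        proxy_pass         http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
EOF

# Validate the configuration and reload Nginx
sudo nginx -t && sudo systemctl reload nginx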

5.4 Final Output of Web API Deployment

By the end of this step, the deployment of the Dispatch Request API is complete. Next, we will move on to the ACM module deployment.

6. Account Configuration Module (ACM) Deployment

6.1 Deployment Configuration

7. Notification Module Deployment

The Notification module can be deployed either on the same server or on a separate Linux server. Our recommendation is always to deploy it on a separate Linux server. The following instructions cover deployment on a new server.

7.1 Steps to deploy Notification

The steps to deploy the Notification module are similar to the Web API deployment described in Section 5, so references are provided wherever required.

The link provided here covers setting up monitoring for the Dispatch API build (invitations-bin), so we will run the same commands modified for the Notifications build.

  1. The executable of the .NET Core app will be stored and managed in the /var/www/notifications-bin working directory.

  2. Create a directory named “dummy_notifications” under /var/www (only for the first time, so that we can create a soft link to this folder; from the next deployment onwards, the link will point to the actual web app executable folder instead of the dummy folder).

  3. All related commands are listed below:

      
    		cd /var/www/
    		sudo mkdir dummy_notifications
    		sudo chown ubuntu:ubuntu dummy_notifications/
    		sudo chown ubuntu:ubuntu dummy_notifications/*
    		sudo ln -s dummy_notifications notifications-bin
    		sudo chown ubuntu:ubuntu notifications-bin
    		ls -lrt

  4. Create a “logs” folder under /var/www; it will store the log files that are linked in the EOD email sent by the Notification module

      
    		cd /var/www/
    		sudo mkdir logs
    		sudo chown ubuntu:ubuntu logs/
    		sudo chown ubuntu:ubuntu logs/*
    		ls -lrt

  5. Create the service definition file:

      
    		sudo nano /etc/systemd/system/kestrel-notifications.service

  6. Copy the following content into the kestrel-notifications.service

    		[Unit]
    		Description=Notifications build (.NET Core) running on ubuntu 18.04
    		[Service]
    		WorkingDirectory=/var/www/notifications-bin
    		ExecStart=/usr/bin/dotnet /var/www/notifications-bin/XM.ID.Invitations.Notifications.dll
    		Restart=always
    		# Restart service after 2 seconds if the dotnet service crashes:
    		RestartSec=2
    		KillSignal=SIGINT
    		SyslogIdentifier=dotnet-notifications
    		User=ubuntu
    		Environment=ASPNETCORE_ENVIRONMENT=Development
    		Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
    		[Install]
    		WantedBy=multi-user.target

  7. Enable the service

    		sudo systemctl enable kestrel-notifications.service
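After enabling it, the service can be started and checked with the usual systemctl and journalctl commands:

sudo systemctl start kestrel-notifications.service
sudo systemctl status kestrel-notifications.service --no-pager

# Tail the service logs if anything looks off
sudo journalctl -u kestrel-notifications.service -f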

8. Dispatcher - ASP.NET Core based Serverless Compute

8.1 Steps to deploy the two Azure Functions

  1. Launch Visual Studio and Sign-In with the subscription under which the Azure Functions have been provisioned.
  2. Launch AzureQueueTrigger.sln from the AzureQueueTrigger directory that is part of the provided source-code.
  3. Right Click + Publish the solution. This will open a screen where you need to hit New to create a new Publish Profile.
  4. As we have already provisioned the required function apps, we will be publishing the code to an Existing slot.
  5. From the next screen choose one of the already provisioned Azure Functions and hit OK.
  6. Now we will edit the Azure App Service Settings, where we will provide details about the Storage Account, Queue Storage, and MongoDB.
  7. Fill in all the required information in the remote settings and then click OK (a CLI alternative for applying these values is sketched after this list):
    • MongoDBConnectionString
    • DatabaseName
    • QueueName
    • AzureWebJobStorage: Azure Storage Account’s Access Key
    • FUNCTIONS_WORKER_RUNTIME: dotnet
    • FUNCTION_EXTENSION_VERSION: ~3
  8. Finally hit Publish.
  9. Similarly launch AzureTimeTrigger.sln from the AzureTimeTrigger directory. Right Click + Publish the code into the second provisioned function slot while following the same steps as above. For this you will again have to provide the following details, as was done in step number 7:
    • MongoDBConnectionString
    • DatabaseName
    • AzureWebJobStorage
    • FUNCTIONS_WORKER_RUNTIME: dotnet
    • FUNCTION_EXTENSION_VERSION: ~3
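The same settings from steps 7 and 9 can also be applied with the Azure CLI. This is only a sketch: the function app and resource group names are placeholders, and the setting names simply mirror the list above.

# Apply the application settings to a provisioned Function App (placeholder names)
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "MongoDBConnectionString=<connection-string>" \
             "DatabaseName=<database-name>" \
             "QueueName=<queue-name>" \
             "AzureWebJobStorage=<storage-account-access-key>" \
             "FUNCTIONS_WORKER_RUNTIME=dotnet" \
             "FUNCTION_EXTENSION_VERSION=~3"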

8.2 Steps to deploy the two AWS Lambdas

  1. Configure your SQS primary queue (not the dead letter queue), making sure you provide the correct dead letter queue name. Both these queues were created at this step.
  2. To do so, log in to your AWS Management Console and then search for SQS. Select your primary queue and from Queue Actions select Configure Queue. After entering the correct details, hit Save Changes.
  3. Next you need to install the AWS Toolkit for Visual Studio.
  4. Now we need to create an AWS Named Profile. To do so, launch Visual Studio and click on AWS Explorer from the View menu. Click on New profile and fill in the required details. Make sure you do this in the same region where all the resources were provisioned before.
  5. Launch AwsQueueTrigger.sln from the AwsQueueTrigger directory that is part of the provided source-code.
  6. Right Click + Publish to AWS Lambda on the solution. In the pop-up screen select your newly created Named Profile and select the Region where you have provisioned all your resources so far. Next choose Queue-Trigger from the drop-down list of available Function names; you would have created the two AWS Lambdas in this step. Then hit Next. On the next screen make sure you choose the existing QueueTriggerRole IAM Role that was created for the Queue-Trigger Function as the Role Name, and also configure the dead letter queue that was created in this step as your DLQ Resource. Next configure the Memory, the Timeout, and the required Environment Variables. Finally hit Upload.
  7. Similarly launch AwsTimeTrigger.sln from the AwsTimeTrigger directory and publish the solution to the second provisioned AWS Lambda. Use the same Named Profile, Region, Memory, Timeout, and Environment Variables values as before, but select the already created Time-Trigger as the function name and TimeTriggerRole as its Role Name. For this Lambda you don’t need to set up a DLQ Resource.
  8. Next we need to set up Event-Triggers for the two deployed Lambdas. To do this, go to your AWS Management Console and search for Lambda. From the next screen click on your Queue-Trigger Function.
  9. Click on Add Trigger.
  10. Search for and select SQS as the desired trigger. Now select your primary SQS standard queue under the SQS Queue section to configure it as the Lambda’s event source. Next, set the batch size to 10, keep the trigger enabled option, and finally hit the Add Button.
  11. Now for the Time-Trigger function we need to create a CloudWatch Event. To do this, search for and select CloudWatch in the AWS Management Console. From the CloudWatch console, select Rules and then hit the Create Rule button.
  12. Next select Schedule as the Event-Source type, configure 0/5 * * * ? * as the CRON Expression (which runs every 5 minutes), and set up your Time-Trigger Lambda as the schedule’s target resource. After this, click the Configure details button. Now give it an appropriate Name and finally hit Create Rule. With this, the Dispatcher deployment is complete (a CLI sketch of the trigger configuration follows this list).
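The trigger wiring from steps 10 and 12 can also be scripted with the AWS CLI. This is only a sketch with placeholder ARNs and names; note that the console flow above also grants the necessary invoke permissions, which you would otherwise need to add yourself.

# Wire the primary SQS queue to the Queue-Trigger Lambda (batch size 10)
aws lambda create-event-source-mapping \
  --function-name <queue-trigger-function-name> \
  --event-source-arn <primary-queue-arn> \
  --batch-size 10 \
  --enabled

# Create the 5-minute schedule and point it at the Time-Trigger Lambda
aws events put-rule --name <rule-name> --schedule-expression "cron(0/5 * * * ? *)"
aws events put-targets --rule <rule-name> --targets "Id"="1","Arn"="<time-trigger-lambda-arn>"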

9. Initiator - ASP.NET Core based Serverless Compute

9.1 Steps to deploy the required Azure Function

  1. First we need to correctly configure the S3 Bucket that was created at this step. This involves creating the required folders inside the bucket as per the details that have been explained here, and subsequently uploading one config.json into each of the folders created. To do this, navigate to your created bucket and click + Create Folder.
  2. Next give a suitable name to your folder and choose None as the folder’s encryption option. Finally hit Save.
  3. Now navigate to the newly created folder and hit + Upload to upload the required config.json.
  4. From the next screen click on Add Files.
  5. Finally hit the Upload button at the bottom right to upload the config.json.
  6. To upload Target Files in the future, simply follow Steps 3-5.
  7. Next we need to configure S3 Events for the bucket. To do this, navigate to the bucket and click on Properties.
  8. Scroll down and click on Events.
  9. This will pop up a new menu. In this menu click on Add Notification.
  10. Create a csvS3Event for .csv target files with the mentioned settings. Make sure to choose SNS Topic for Send to, along with the SNS-Topic-Name that was created in this step.
  11. Similarly create an xlsxS3Event for .xlsx files with the Suffix set to .xlsx.
  12. Now we need to deploy the source-code into the Azure Function App. To do this, begin by launching AzureRequestInitiator.sln from the AzureRequestInitiator directory that is part of the provided source-code. Then follow Steps 3 to 8 of Section 8.1 while using the Initiator slot that was created at this step, making sure to provide the following as the Azure App Service Settings:
    • MongoDBConnectionString
    • DatabaseName
    • AzureWebJobStorage
    • Region (Provide Regional Code of the configured S3 Bucket)
    • AwsAccessKeyId
    • AwsSecretAccessKey
    • FUNCTIONS_WORKER_RUNTIME: dotnet
    • FUNCTION_EXTENSION_VERSION: ~3
  13. Next we need to make the Azure Function App a subscriber of the configured SNS Topic. To do this, go to the Initiator Function App on the Azure Portal and select Functions.
  14. From the next screen select S3EventTrigger.
  15. Now head over to Overview and click on Get Function Url. Next click on the copy button to copy the value onto the clipboard.
  16. Next head over to the configured SNS Topic, which was created at this step, and hit Create Subscription.
  17. On the next screen select HTTPS as the desired protocol and provide the URL from Step 15 as the Endpoint (append “https://” to the Url obtained in Step 15). Once done, hit Create Subscription. You should receive a success message on the next screen (a CLI sketch of this subscription follows this list).
  18. This completes the setup and deployment of the Initiator Component.
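Steps 16 and 17 can also be scripted with the AWS CLI, as sketched below with placeholder values; the subscription still has to be confirmed by the endpoint, just as when it is created through the console.

# Subscribe the Azure Function endpoint (from Step 15) to the SNS topic
aws sns subscribe \
  --topic-arn <sns-topic-arn> \
  --protocol https \
  --notification-endpoint "https://<function-url-from-step-15>"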

9.2 Steps to deploy the required AWS Lambda

  1. First we need to correctly configure the S3 Bucket that was created at this step. This involves creating the required folders inside the bucket as per the details that have been explained here, and subsequently uploading one config.json into each of the folders created. To do this, navigate to your created bucket and click + Create Folder.
  2. Next give a suitable name to your folder and choose None as the folder’s encryption option. Finally hit Save.
  3. Now navigate to the newly created folder and hit + Upload to upload the required config.json.
  4. From the next screen click on Add Files.
  5. Finally hit the Upload button at the bottom right to upload the config.json.
  6. To upload Target Files in the future, simply follow Steps 3-5.
  7. Next we need to configure S3 Events for the bucket. To do this, navigate to the bucket and click on Properties.
  8. Scroll down and click on Events.
  9. This will pop up a new menu. In this menu click on Add Notification.
  10. Create a csvS3Event for .csv target files with the mentioned settings. Make sure to choose Lambda Function for Send to, along with the AWS Lambda that was created in this step.
  11. Similarly create an xlsxS3Event for .xlsx files with the Suffix set to .xlsx.
  12. Now we need to deploy the source-code into the AWS Lambda. To do this begin with launching AwsRequestInitiator.sln from the AwsRequestInitiator directory that is part of the provided source-code.
  13. Right Click + Publish to AWS Lambda on the solution. In the pop-up screen, select the previously created Named Profile (created at Step 4 of Section 8.2). Moving forward, select the Region where you have provisioned all your resources so far, and subsequently choose Initiator from the drop-down list of all available functions; this function was created here. Once done, hit Next. On the next screen make sure you choose the existing InitiatorRole IAM Role that was created for the Initiator Function. Finally, configure the Memory, the Timeout, and the required Environment Variables and hit Upload.
  14. This completes the setup and deployment of the Initiator Component.

10. Reporting Module Deployment

The steps to deploy the Reporting modules are exactly the same as the Notification module deployment described in Section 7. The Reporting module has two sub-module deployments (DPReporting and DataMerger), and both follow similar steps. These two modules are deployed on the same server where the Notification module is running.

The following steps are to be performed for each sub-module of the Reporting module.

  1. Log in to the Ubuntu server where the Reporting modules are to be deployed - this step
  2. Set up monitoring for each of the Reporting modules’ .NET Core apps - this step
  3. Download the code using the steps mentioned already here - this step

10.1 DataMerger Module - Configurations



The DPReporting job depends on the data merging job, so make sure you deploy the DataMerger first. Here we cover the “DataUploadSettings” configuration.

10.2 DPReporting Module - Configurations



Here we cover the configuration parameters “ScheduleReport”, “LogFilePath”, and “DetailedLogs” in detail.

Next, we have the settings for the Detailed Logs report. The Detailed Logs report is a large file that cannot be sent through email. Hence, it is stored on the app server, and the user is provided with a URL from which they can download the report. The app service deletes all Detailed Logs reports that were generated more than 2 days ago.


  1. Publish these binaries one by one to Ubuntu server using this step

  2. Deploy the published folder to Ubuntu Server using this step

  3. Create a basic auth password file, which will be used in the Nginx configuration to authenticate a user before they can access the logs

    Follow this link to create a password file using the apache2-utils tools (a minimal example is sketched below).

    Please note that the credentials created here should be shared with the CCE/X admin so that they can access the report links received through email. Also, use the same credentials created for the Notification logs for the Reporting files as well.
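
    As a minimal sketch, assuming the apache2-utils package and a hypothetical username:

        sudo apt-get install -y apache2-utils
        # -c creates the password file; omit -c when adding further users
        sudo htpasswd -c /etc/apache2/.htpasswd <username>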

  4. Configure Nginx to make the “logs” available over a URL. Follow this step

    Except for nginx.conf, all other details remain the same. Below is the nginx.conf snippet which should be used additionally for the reporting module.

        cd /etc/nginx
        sudo vim /etc/nginx/nginx.conf
    	
    	location /reports {
    		auth_basic "Restricted Access!";
            auth_basic_user_file    /etc/apache2/.htpasswd;
            root /var/www/;
            try_files $uri $uri/ /error_mess.html;
        }
    
        location /error_mess.html {
    		return 404 "The file you are trying to download is no longer available and this link has expired.";
        }
    
    	
  5. Restart Nginx to reflect the above changes.
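
    For example:

        sudo nginx -t
        sudo systemctl restart nginx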