How we helped a lottery win (Part 1)

Last year, a government lottery agency that conducts and manages gaming facilities, province-wide lottery games, internet gaming, bingo, and other electronic gaming products at Charitable Gaming Centers came to us with an interesting problem. They wanted the digital arm of our consulting firm to build an online Player Exclusion system to manage the ineligibility of players for the agency's digital business. Players who have Voluntarily Self-Excluded (VSE) and lottery employees are ineligible to play. The agency manages its VSE list in its enterprise application.

The Player Exclusion system was envisioned as a collection of Java-based microservices and a frontend web application that generates massive reports by interfacing with enterprise systems hosted in on-premises data centers. Given its complex nature and unpredictable scalability, the cloud was the first and only choice to house the system.

 

Now the interesting part … 

How do we develop a cloud-native application while simultaneously onboarding and training a government agency that has worked exclusively with on-premises systems?

Our Approach: 
  1. Use managed/PaaS services where available 
  2. Automation-first approach 
  3. Use Azure products, if available

 

Now, the Design Part … 

 

Networking: 

Cloud, by its very nature, is endless. The network design must be future-proof, able to accommodate additional applications and provide the redundancy options required. We decided on a Hub and Spoke network topology. The hub is a virtual network (VNet) in Azure that acts as a central point of connectivity to the on-premises network or other spoke networks. The spokes are VNets that peer with the hub and can be used to isolate workloads. The hub is connected to the on-prem enterprise systems through an ExpressRoute circuit. The spoke houses the microservices, databases, monitoring, and API management services. As the service expands in the future, more spokes can be added or replaced without disrupting the existing architecture.
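To make the topology concrete, here is a minimal sketch of standing it up with the Azure Resource Manager fluent SDK for Java. The resource names, address spaces, and resource group are hypothetical, and an ExpressRoute gateway would still need to be added to the hub's GatewaySubnet.

```java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;
import com.azure.resourcemanager.network.models.Network;

public class HubSpokeSketch {
    public static void main(String[] args) {
        AzureResourceManager azure = AzureResourceManager
                .authenticate(new DefaultAzureCredentialBuilder().build(),
                              new AzureProfile(AzureEnvironment.AZURE))
                .withDefaultSubscription();

        // Hub VNet: the central point of connectivity; the ExpressRoute
        // gateway to the on-prem enterprise systems lives in GatewaySubnet.
        Network hub = azure.networks().define("vnet-hub")
                .withRegion("canadacentral")
                .withNewResourceGroup("rg-network")
                .withAddressSpace("10.0.0.0/16")
                .withSubnet("GatewaySubnet", "10.0.0.0/27")
                .create();

        // Spoke VNet: isolates the Player Exclusion workload
        // (microservices, databases, monitoring, API management).
        Network spoke = azure.networks().define("vnet-spoke-exclusion")
                .withRegion("canadacentral")
                .withExistingResourceGroup("rg-network")
                .withAddressSpace("10.1.0.0/16")
                .withSubnet("snet-workload", "10.1.0.0/24")
                .create();

        // Peer the spoke to the hub; a future spoke repeats this one step
        // without disturbing the existing network.
        spoke.peerings().define("peer-spoke-to-hub")
                .withRemoteNetwork(hub)
                .create();
    }
}
```

Each new workload gets its own spoke and a single peering, which is exactly what keeps the design future-proof.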

Storage: 

The solution required a multitude of encrypted storage options for the various stages of data processing. These were incorporated using several Azure products: Storage Accounts (temporary application data), Azure SQL Database (customer data), Azure Service Bus (queuing), and Azure Key Vault (secrets and authorization). Storage is replicated asynchronously across regions to support the application during a disaster.
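As a rough illustration of how a microservice touches two of these stores, the sketch below pulls a connection string from Key Vault and queues work on Service Bus using the Azure SDK for Java. The vault URL, secret name, and queue name are hypothetical.

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class StorageSketch {
    public static void main(String[] args) {
        // Key Vault holds credentials so they never live in config files.
        SecretClient secrets = new SecretClientBuilder()
                .vaultUrl("https://kv-exclusion.vault.azure.net") // hypothetical vault
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
        String sbConnection = secrets.getSecret("servicebus-connection").getValue();

        // Service Bus decouples the stages of data processing.
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString(sbConnection)
                .sender()
                .queueName("exclusion-checks") // hypothetical queue
                .buildClient();
        sender.sendMessage(new ServiceBusMessage("{\"playerId\":\"12345\"}"));
        sender.close();
    }
}
```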

 

Web Application: 

The Player Exclusion system will be built on Java Spring Boot and Linux, running in Docker containers. Due to compliance requirements, we decided to host them in a fully isolated and dedicated App Service Environment (ASE), built for securely running App Service apps at scale. The App Service will be fronted by Azure Application Gateway (App Gateway), Azure's cloud-based web application firewall (WAF) solution. App Gateway will filter the traffic and forward it to the Azure Load Balancer, analyzing it in real time to guard against threats such as DDoS, SQL injection, buffer overflows, file inclusion, and cross-site scripting, among others.
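One practical detail: App Gateway decides backend health via an HTTP probe, so each Spring Boot service needs a lightweight endpoint for it to hit. A minimal sketch follows; the /health path is an assumption (Spring Boot Actuator's built-in endpoint is an equally valid choice).

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {

    // App Gateway's health probe polls this path; instances that stop
    // answering are pulled out of the backend pool automatically.
    @GetMapping("/health")
    public String health() {
        return "OK";
    }
}
```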

 

Monitoring & Logging:

We will use a combination of monitoring and logging products to serve different purposes. Azure Monitor will be the primary monitoring solution, funnelling all application and infrastructure logs (including App Service Logs) into one place. Logs will be retained in Azure as per compliance requirements and also streamed to Splunk for additional analysis.
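In practice this means the services themselves only need to write well-structured logs to stdout; App Service captures the container output and Azure Monitor takes it from there. A sketch of what that looks like in a service, using SLF4J (the standard Spring Boot logging facade); the class and lookup are hypothetical.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExclusionService {

    private static final Logger log = LoggerFactory.getLogger(ExclusionService.class);

    public boolean isExcluded(String playerId) {
        // Container stdout is captured by App Service Logs, funnelled into
        // Azure Monitor, retained per compliance, and streamed to Splunk.
        log.info("Exclusion check requested for player {}", playerId);
        boolean excluded = lookUpExclusion(playerId);
        if (excluded) {
            log.warn("Player {} is ineligible (VSE or employee)", playerId);
        }
        return excluded;
    }

    private boolean lookUpExclusion(String playerId) {
        // Placeholder for the real lookup against the exclusion data store.
        return false;
    }
}
```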

 

Disaster Recovery: 

As part of the business continuity and disaster recovery (BCDR) implementation, all Azure workloads will be replicated from the primary site to a secondary site. Keeping the secondary site in a paired region ensures that Azure system and software updates are not deployed to both regions at the same time. Azure Traffic Manager will be utilized to route traffic between the paired regions.

Azure Traffic Manager will provide geo-failover between the paired Azure Canada Central and Azure Canada East regions. It allows control over the distribution of user traffic to user-defined service endpoints running in different data centers around the world. Traffic Manager works at the Domain Name System (DNS) level, directing end-user requests to the most appropriate endpoint based on the configured traffic-routing method and the current view of endpoint health.
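For illustration, a failover profile across the paired regions might be defined with the Azure Resource Manager fluent SDK for Java, as below. The profile name, DNS label, and App Gateway FQDNs are hypothetical; priority routing sends all traffic to Canada Central while its probes are healthy and fails over to Canada East otherwise.

```java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.Region;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;
import com.azure.resourcemanager.trafficmanager.models.TrafficManagerProfile;

public class FailoverSketch {
    public static void main(String[] args) {
        AzureResourceManager azure = AzureResourceManager
                .authenticate(new DefaultAzureCredentialBuilder().build(),
                              new AzureProfile(AzureEnvironment.AZURE))
                .withDefaultSubscription();

        TrafficManagerProfile profile = azure.trafficManagerProfiles()
                .define("tm-exclusion")
                .withNewResourceGroup("rg-dr", Region.CANADA_CENTRAL)
                .withLeafDomainLabel("exclusion") // => exclusion.trafficmanager.net
                .withPriorityBasedRouting()       // DNS-level failover routing
                .defineExternalTargetEndpoint("primary-canadacentral")
                    .toFqdn("appgw-primary.example.com")   // hypothetical App Gateway FQDN
                    .fromRegion(Region.CANADA_CENTRAL)
                    .withRoutingPriority(1)                // all traffic while healthy
                    .attach()
                .defineExternalTargetEndpoint("secondary-canadaeast")
                    .toFqdn("appgw-secondary.example.com") // hypothetical
                    .fromRegion(Region.CANADA_EAST)
                    .withRoutingPriority(2)                // used only on failover
                    .attach()
                .create();

        System.out.println("Profile FQDN: " + profile.fqdn());
    }
}
```

Because the failover happens in DNS, clients transparently resolve to the surviving region once their cached records expire.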

 

Once the architecture jigsaw was pieced together (as below), we set out to build the application and the infrastructure simultaneously. 

 

Part 2 of this article will be published next week. In it, I will cover how we built and automated the infrastructure and the application deployment.
