IT Security 002: MAS TRMG Appendix

Continuing from the previous post, this is a summary of Appendix A-F of MAS Technology Risk Management Guidelines (TRMG).

The appendix provides more details on some of the specifications mentioned in the main body of the guidelines, and tends to be more technical.

Appendix A: System Security Testing and Source Code Review

  1. Security testing alone cannot detect all threats and weaknesses. FIs should also include system source code review in their System Development Life Cycle (SDLC).
  2. FIs should take note of the following during system testing and source code review:
    • Information Leakage – scrutinise potential sources of sensitive information leakage, such as verbose error messages, hard-coded data, and file and directory operations.
    • Resiliency Against Input Manipulation – a lack of proper input validation can give rise to major vulnerabilities such as script injection and buffer overflows. Validation routines should be reviewed and tested to assess their effectiveness. Validation should include:
      • validate all inputs to an application.
      • validate all forms of data input format.
      • verify the handling of null or incorrect inputs.
      • verify content formatting.
      • validate maximum length of input.
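As an illustration, the validation checks above can be sketched as a small routine; the account-number pattern and 64-character cap are assumptions for the example, not requirements from the TRMG:

```python
import re

MAX_LEN = 64  # assumed maximum input length for this example
ACCOUNT_RE = re.compile(r"^[0-9]{8,12}$")  # hypothetical account-number format

def validate_account_input(value):
    """Apply the null, type, length, and format checks listed above."""
    if value is None:                 # handle null input explicitly
        return False
    if not isinstance(value, str):    # reject unexpected data types
        return False
    if len(value) > MAX_LEN:          # enforce the maximum input length
        return False
    return bool(ACCOUNT_RE.fullmatch(value))  # verify content formatting
```

Every input an application accepts would get a validator of this sort, enforced server-side regardless of any client-side checks.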
    • Unsafe Programming Practices – review the source code to identify unsafe practices:
      • vulnerable function calls.
      • poor memory management.
      • unchecked argument passing.
      • inadequate logging and comments.
      • use of relative paths.
      • logging of authentication credentials.
      • assigning inappropriate access privilege.
    • Deviation From Design Specifications – test critical modules (such as authentication functions and session management) to ensure no deviation. Include:
      • verify security requirements (credential expiry, revocation, reuse) and protection of cryptographic keys for authentication.
      • verify sensitive information stored in cookies is encrypted.
      • verify session identifier is random and unique.
      • verify session expires after a pre-defined length of time.
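The session-identifier and expiry checks above can be sketched with Python's `secrets` module; the 15-minute lifetime is an assumed value:

```python
import secrets
import time

SESSION_TTL = 15 * 60  # assumed pre-defined session lifetime in seconds

class Session:
    def __init__(self):
        # 256 bits of CSPRNG output: random and, in practice, unique
        self.token = secrets.token_urlsafe(32)
        self.created = time.time()

    def expired(self, now=None):
        """Check whether the pre-defined lifetime has elapsed."""
        now = time.time() if now is None else now
        return now - self.created > SESSION_TTL
```

Testing would then verify both properties: that two sessions never share a token, and that a session past its lifetime is rejected.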
    • Cryptographic Functions – strength of cryptography depends on algorithm, key size, and implementation. Consider:
      • implement cryptographic modules based on authoritative standards and reputable protocol.
      • review algorithms and key configurations for deficiencies and loopholes.
      • assess the choice of ciphers, key sizes, key exchange protocols, hashing functions, RNG.
      • test all cryptographic operations and key management procedures.
      • (refer to Appendix C).
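As a small example of how algorithm choice (SHA-256), key size (256 bits) and implementation detail (constant-time comparison) all interact, here is a keyed-hash integrity check; this is a sketch, not a substitute for a reviewed cryptographic module:

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # 256-bit key from the OS CSPRNG

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 integrity tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received: bytes) -> bool:
    # compare_digest runs in constant time, avoiding a timing side channel
    return hmac.compare_digest(tag(message), received)
```

A naive `==` comparison here would be a classic implementation flaw that source code review is meant to catch.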
    • Exception Handling – ensure robust exception and error handling that facilitates fail-safe processing, assists problem diagnosis through logging, and prevents leakage of sensitive information.
    • Business Logic – ensure that business logic is tested and that unauthorised functions or transactions are denied. Consider the use of negative testing.
    • Authorisation – perform tests to ensure actual access rights granted conform to the approved security access matrix.
    • Logging – ensure the following when implementing logging functions:
      • sensitive information should not be logged.
      • maximum data length for logging is pre-determined.
      • both successful and unsuccessful authentication attempts are logged.
      • both successful and unsuccessful authorisation attempts are logged.
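A sketch of an authentication log call obeying these rules; the 128-character field cap is an assumed pre-determined maximum:

```python
import logging

logger = logging.getLogger("auth")
MAX_FIELD_LEN = 128  # assumed pre-determined maximum length per logged field

def log_auth_attempt(user_id, success):
    """Log the outcome of an authentication attempt, never the credential."""
    outcome = "SUCCESS" if success else "FAILURE"
    safe_user = str(user_id)[:MAX_FIELD_LEN]  # cap field length before logging
    message = f"authentication {outcome} for user={safe_user}"
    logger.info(message)  # note: the password itself is never passed in
    return message
```

The signature deliberately has no password parameter, so the credential cannot end up in the log by accident.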

Appendix B: Storage System Resiliency

  1. Overview
    1. Resiliency and availability of storage systems are crucial to continuous operation of critical applications.
  2. Reliability and Resiliency
    1. FIs should review storage system architecture and connectivity regularly (for both centralised and distributed storage), prevent single points of failure and fragile functional design, and ensure adequate technical support.
    2. Poorly designed SANs concentrate risk in the system infrastructure. FIs should ensure redundancy of all SAN components (multiple links and switches for all I/O operations between hosts, adapters, storage processors and storage arrays), and a highly available, resilient, and flexible architecture.
    3. FIs should establish a sound patch management process for timely updates of storage systems, and a rigorous change management process for deploying configuration changes and upgrades.
    4. FIs should establish in-house alert and monitoring capabilities for early detection of storage system outages. Consider data replication mechanisms and vendor call-home capability for enhanced resiliency. FIs should also maintain oversight of diagnostics and remediation activities.
  3. Recoverability
    1. FIs should ensure the storage system architecture can switch from the primary production site to an alternate site to meet the expected RTO and RPO, and should regularly test recoverability and data consistency at the alternate site.

Appendix C: Cryptography

  1. Principles of Cryptography
    1. Primary purpose is to protect integrity and privacy of sensitive information.
    2. The secrecy of the key, not of the algorithm, is what matters. Ensure protection and secrecy of all keys used (master keys, key-encrypting keys, data-encrypting keys).
  2. Cryptographic Algorithm and Protocol
    1. Cipher algorithms may need to be enhanced or replaced as ever-improving computer hardware and techniques enable new attacks on cryptography.
    2. FIs should review algorithms and key configurations for deficiencies and loopholes, and assess the choice of ciphers, key sizes, key exchange protocols, hashing functions, RNG.
    3. FIs should ensure the RNG seed has sufficient size and randomness to preclude the possibility of an optimised brute-force attack.
  3. Cryptographic Key Management
    1. FIs should establish key management policy and procedures covering the full key lifecycle: generation > distribution > installation > renewal > revocation > expiry.
    2. FIs should ensure keys are securely generated, that key constituents are destroyed once combined, and that no single person has access to the entire key or all of its constituents. Ensure that keys are created > stored > distributed > changed under stringent conditions.
    3. FIs should ensure unencrypted symmetric keys are entered into tamper-resistant devices (e.g. HSMs) under the principle of dual control. Each key should be used for a single purpose only, to reduce exposure.
    4. FIs should decide the appropriate effective timeframe (cryptoperiod) of keys, using sensitivity of data and operational criticality to determine frequency of key changes.
    5. FIs should ensure HSM and keying materials are physically and logically protected.
    6. FIs should ensure keys are not exposed during usage or transmission.
    7. FIs should use a secure key destruction method on expired keys to prevent recovery by any party.
    8. New keys should be generated independently from the previous keys.
    9. FIs should maintain a backup of keys, with same level of protection accorded to the original keys.
    10. FIs should immediately revoke > destroy > replace any compromised keys, as well as all derived keys or encrypted keys affected. Inform all parties concerned of the revocation.
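One common way to meet the "no single person holds the entire key" requirement is XOR key splitting, in which the key is recoverable only when all constituents are combined; a sketch assuming a simple n-of-n split (threshold schemes such as Shamir's secret sharing are richer alternatives):

```python
import secrets

def split_key(key, parts=2):
    """Split a key into XOR constituents; all parts are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(parts - 1)]
    last = key
    for s in shares:  # final share is the key XORed with all the others
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_key(shares):
    """XOR all constituents back together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key
```

Each custodian holds one share and enters it separately into the HSM, so no individual ever sees the whole key in the clear.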

Appendix D: Distributed Denial-Of-Service Protection

  1. Overview
    1. The proliferation of botnets and new attack vectors has increased the potency of DDOS attacks.
    2. The evolving threat landscape allows more sophisticated DDOS attacks on other OSI layers with minimal bandwidth.
    3. A DDOS attack can cripple the networks and systems of even large commercial organisations, causing massive service disruption or cessation.
    4. In spite of malware protection, FIs should still bolster system robustness against DDOS attacks.
  2. Detecting and Responding to DDOS Attacks
    1. FIs should deploy appropriate tools to detect, monitor, analyse anomalies in networks and systems (unusual traffic, volatile system performance, sudden surge in utilisation) and have anti-DDOS equipment to respond to DDOS attacks.
    2. On top of network perimeter security devices that alert FIs of suspected attacks, consider using purpose-built high performance appliances to handle DDOS so that legitimate traffic is still allowed as malicious packets are filtered.
    3. Single points of failure vulnerable to DDOS attacks should be eliminated through source code review, network design analysis, and configuration testing.
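As a toy illustration of anomaly detection, a baseline-deviation check can flag a "sudden surge in utilisation"; the 3-sigma threshold is an assumption, and real anti-DDOS appliances use far richer signals than a single metric:

```python
from statistics import mean, stdev

def is_anomalous(baseline, sample, sigmas=3.0):
    """Flag a sample deviating more than `sigmas` std-devs from the baseline."""
    mu = mean(baseline)
    sd = stdev(baseline)
    return abs(sample - mu) > sigmas * sd
```

Fed with, say, requests-per-second counts, this rule passes normal fluctuation but flags a volumetric spike for investigation.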
  3. Selection of Internet Service Providers
    1. Effective countermeasures against DDOS often rely on ISPs to dampen attacks in the upstream network.
    2. FIs should incorporate DDOS attack considerations when selecting ISP and determine:
      • whether ISP offers DDOS protection or clean pipe services.
      • the ability of ISP to scale up network bandwidth on demand.
      • the adequacy of ISP’s incident response plan.
      • capability and readiness of ISP to respond quickly to attacks.
  4. Incident Response Planning
    1. FIs should devise incident response framework and routinely validate it to facilitate fast response to DDOS attacks. Include:
      • detailed immediate steps to counter an attack.
      • invocation of escalation procedures.
      • activation of service continuity arrangements.
      • triggering of customer alerts.
      • reporting of the attack to MAS.
    2. FIs should assimilate ISP incident response plans into their own, establish a communication protocol with the ISP and conduct periodic joint incident response exercises.

Appendix E: Security Measures for Online Systems

  1. Overview
    1. A man-in-the-middle (MITM) attack = an interloper accessing and modifying communications between two parties without revealing that the link has been compromised.
    2. There are many possible MITM attacks (on computing devices, internal networks, information service providers, web servers, anywhere along the path between user and FI’s server).
  2. Security Measures
    1. FIs should implement adequate controls and measures to prevent MITM as part of 2FA infrastructure.
    2. For high-risk transactions, consider:
      • use digital signatures and key-based message authentication codes (KMAC) to prevent MITM.
      • ensure the customer can distinguish between generating an OTP on the hardware token and signing a transaction.
      • use different cryptographic keys for generating OTP and for signing.
    3. FIs may choose to implement challenge-based or time-based OTP. Time-based OTP validity window should be configured on server side, and be as short as practicable to lower risks.
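A time-based OTP in the style of RFC 6238 can be sketched in a few lines; the 30-second step and 6-digit length are common defaults, and the validity window (how many steps the server accepts) is a separate server-side configuration:

```python
import hashlib
import hmac
import struct
import time

STEP = 30  # common time-step; the server-side validity window builds on this

def totp(secret, for_time=None, digits=6):
    """Generate a time-based OTP (RFC 6238 style, HMAC-SHA1 truncation)."""
    counter = int((time.time() if for_time is None else for_time) // STEP)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Checking an implementation against the published RFC 6238 test vectors (secret `12345678901234567890`) is a quick sanity test before relying on it.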
    4. Customers should be notified through a second channel of high-risk transactions, with meaningful information of the transaction. The notification should not be sent to the same device performing the transaction.
    5. FIs should implement end-to-end encryption at the application layer to protect customer PINs and passwords, on top of SSL.
    6. Online sessions should automatically terminate after a fixed period unless the customer re-authenticates.
    7. FIs should educate customers to terminate the login session when a wrong SSL server certificate warning appears, and to notify the FI immediately.

Appendix F: Customer Protection and Education

  1. Overview
    1. FIs should protect customers’ accounts and data, and raise customers’ security awareness with regard to online financial services.
  2. Customer Protection
    1. FIs should not distribute software to customers via the internet unless adequate security safeguards are in place. There should be appropriate alerts and assistance for the customer to verify the origin and integrity of those downloads.
    2. Observe the following controls when handling customers’ login credentials:
      • implement dual control and segregation of duties in password generation, dissemination and account activation.
      • print password mailers in a secure location where access is restricted and monitored.
      • destroy mailer spoilages immediately and generate a new password for each reprint.
      • destroy all stationery that may contain password imprints during mailer printing.
      • ensure passwords are not exposed or compromised during dissemination process.
      • ensure passwords are not processed, transmitted, stored in clear text.
      • require customers to change passwords immediately upon first login.
      • only distribute hardware token that has been assigned to a customer account.
    3. FIs should inform customers about the risks and benefits, terms and conditions, and the rights, obligations and responsibilities of all parties (in particular regarding processing errors and security breaches) before customers subscribe to the service, in an easy-to-understand format.
    4. FIs should make the terms and conditions readily available to the customers and require a positive acknowledgement on initial logon or subscription.
    5. FIs should post these disclosures on its website:
      • customer privacy and security policy
      • customer dispute handling, reporting, and resolution procedures, including expected response times. Explain the process to resolve problems or disputes, and the circumstances under which losses would be attributable to the FI or the customer if security breaches occur.
      • security measures and reasonable precautions customers should take when accessing their online accounts (prevent unauthorised transactions, fraud, stealing of credentials, impersonation).
    6. FIs should ensure that any interference to an authenticated session will result in session termination, and affected transactions are resolved or reversed out. Promptly notify the customer of such incident.
  3. Customer Education
    1. FIs should educate customers on the security and reliability of their interactions with the FI. This builds customer confidence, and customers will understand the appropriate security measures they should take to safeguard their own devices.
    2. FIs should provide sufficient instruction and information to customers on new operating features and functions. Continual education and timely information will help customers in reporting security problems.
    3. FIs should remind customers on the need to protect their authentication information. FIs may display security instructions on login pages. Consider the following guidelines:
      • PIN should be at least 6 digits or alphanumeric characters.
      • PIN should not be based on guessable personal information.
      • PIN should be kept confidential.
      • PIN should be memorised and not recorded anywhere.
      • PIN should be changed regularly and whenever there is any suspicion of compromise or impairment.
      • Same PIN should not be used for multiple applications.
      • Customer should not allow browser to store or retain usernames and passwords.
      • Customer should check authenticity of FI’s website by validating the URL and digital certificate information (SSL EV certifications).
      • Customers should check that HTTPS is in use and the security icon appears in the browser when authentication and encryption are expected on the website.
      • Customer should not allow anyone to tamper with their OTP token.
      • Customer should not reveal the OTP generated from their token.
      • Customer should not divulge the serial number of their OTP token.
      • Customer should check their account information, balance and transactions frequently and report any discrepancies.
      • Customer should inform FI immediately on the loss of their mobile phones, or changes in phone numbers.
    4. FIs should advise customers to adopt the following security precautions and practices:
      • install anti-virus, anti-spyware, and firewall software on their personal devices.
      • update OS and protection software regularly.
      • remove file and printer sharing in computers, especially when connected to internet.
      • regularly backup critical data.
      • consider encryption technology to protect highly sensitive information.
      • log off at the end of online sessions.
      • clear browser cache after online sessions.
      • do not install software or run programs of unknown origins.
      • delete junk or chain emails.
      • do not open email attachments from strangers.
      • do not disclose personal or financial information to little-known or suspicious website.
      • do not use a device that cannot be trusted.
      • do not use public internet or devices to access online services or perform financial transactions.
    5. FIs should educate customers on the features of payment cards and the associated risks, the security features and steps to report card loss or fraud cases.
    6. The above information is not intended to be static or exhaustive. FIs should provide updated security practices and guidelines to customers in a user-friendly manner.



IT Security 001: MAS Technology Risk Management Guidelines (TRMG)

The Monetary Authority of Singapore (MAS) has published a set of Technology Risk Management Guidelines (TRMG) to help financial institutions address technology risks. Instead of finding the TRMG a nuisance, I feel the guidelines are fantastic: they provide a starting point for an IT department to begin addressing technology risks that may go unnoticed if the department does not already possess the skills and expertise to address these concerns. Even if you are not a financial institution, I think the TRMG is still relevant for benchmarking your own risk management capabilities.

I attempt to create my own TL;DR version of the TRMG to capture the key principles and make it easier to remember for myself. Please only use this as a cheat sheet, and thoroughly review the Guidelines on your own if you are providing consultation or advice for your company.

1. Introduction

  1. IT is integral to financial institutions’ (FIs) business strategies.
  2. The IT systems of FIs are becoming increasingly complex.
  3. FIs are offering a greater variety of IT services, so they should fully understand and manage the associated technology risks.
  4. TRMG consists of management principles and best practices to guide FIs in:
    • Establishing a sound and robust technology risk management (TRM) framework.
    • Strengthening system security, reliability, resiliency, and recoverability.
    • Protecting customer data, transactions and systems.
  5. The TRMG is not legally binding, but MAS strongly encourages FIs to observe it.

2. Applicability of the Guidelines

  1. FIs may adapt the TRMG where appropriate. TRMG should be applied in conjunction with relevant regulatory requirements and industry standards.
  2. TRMG objective is to promote sound practices and processes for managing technology.

3. Oversight of Technology Risks by Board of Directors (Board) and Senior Management (SM)

  1. Critical IT system failures can lead to reputational damage, regulatory breaches, revenue and business losses.
  2. Board and SM should have oversight of technology risks and ensure IT is capable of supporting business.
  3. Roles and Responsibilities
    1. Board and SM should ensure TRM framework is established and maintained. They should be involved in key IT decisions.
    2. Board and SM should ensure that controls and practices achieve security, reliability, resiliency and recoverability.
    3. Board and SM should consider cost-benefit issues (reputation, consequential impact, legal implications) when investing in controls and security measures for IT (systems, networks, datacentres, operations, and backups)
  4. IT Policies, Standards and Procedures
    1. FIs should establish policies, standards and procedures to manage risks and safeguard information system assets (data, systems, network device and other IT equipment).
    2. Policies, standards and procedures should be reviewed and updated regularly.
    3. Compliance process should verify that standards and procedures are enforced. Deviations should be addressed on a timely basis by a follow-up process.
  5. People Selection Process
    1. Have a screening process to carefully select staff, vendors and contractors to minimise technology risks due to system failure, internal sabotage or fraud.
    2. Staff, vendors and contractors authorised to access systems should be required to protect sensitive or confidential information.
  6. IT Security Awareness
    1. Establish a comprehensive security awareness training program for all staff. It should include:
      • IT Security policies and standards
      • Individual responsibility
      • Measures to safeguard information system assets
      • Applicable laws, regulations and guidelines pertaining to usage, deployment and access to IT resources.
    2. The training program should be conducted and updated at least annually, and applies to new and existing staff, contractors and vendors who access IT resources.
    3. SM to endorse training program. Content to be reviewed and updated to be relevant to emerging and evolving technology risks.

4. Technology Risk Management Framework

  1. The TRM framework manages risks in a systematic and consistent manner. It encompasses:
    • Roles and responsibilities in managing technology risks.
    • Identification and prioritisation of information system assets.
    • Identification and assessment of impact and likelihood of current and emerging threats, risks and vulnerabilities.
    • Implementation of appropriate practices and controls to mitigate risks.
    • Periodic update and monitoring of risk assessment to include changes in systems, environmental or operating conditions that would affect risk analysis.
  2. Risk management practices and internal controls should be instituted and verified to be effective.
  3. Information System Assets
    1. Assets should be adequately protected from unauthorised access, misuse, fraudulent modification, suppression or disclosure.
    2. FIs should establish clear policy on assets protection. Identify criticality of assets to develop protection plans.
  4. Risk Identification
    1. Entails determination of threats and vulnerabilities in the IT environment:
      • internal and external network
      • hardware and software
      • applications and systems interfaces
      • operations and human elements
    2. Threats may take any form as long as they can cause harm by exploiting system vulnerabilities. Humans are a significant source of threats.
    3. FIs should be vigilant in monitoring mutating and growing risks e.g. ransomware outbreaks.
  5. Risk Assessment
    1. Analyse and quantify the business and operations impact of risks identified.
    2. Extent of impact depends on likelihood of threat and vulnerabilities occurring and causing harm.
    3. FIs should develop a threat and vulnerability matrix to assess potential impact and prioritise risks.
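A threat-and-vulnerability matrix often reduces to scoring likelihood against impact; a toy sketch with illustrative threats on assumed 1–5 scales (the entries are examples, not from the TRMG):

```python
# Illustrative entries on assumed 1-5 likelihood and impact scales
RISKS = [
    {"threat": "ransomware outbreak", "likelihood": 4, "impact": 5},
    {"threat": "insider fraud", "likelihood": 2, "impact": 4},
    {"threat": "hardware failure", "likelihood": 3, "impact": 2},
]

def prioritise(risks):
    """Rank risks by likelihood x impact, highest exposure first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)
```

The ranked output then feeds directly into the risk treatment step, where the highest-scoring risks are addressed first.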
  6. Risk Treatment
    1. FIs should implement risk mitigation and control strategies for each type of risk identified. Measures should be consistent with the value of information system assets and level of risk tolerance.
    2. Risk mitigation entails a methodical approach for evaluating > prioritising > implementing risk control, which includes a combination of:
      • technical control
      • procedural control
      • operational control
      • functional control
    3. FIs should prioritise the highest-ranking risks given time and resource constraints. FIs should also consider their risk tolerance for damage and losses, and the cost-benefit analysis (CBA) of implementing risk controls.
    4. FIs should maintain business stability (cost-effectiveness concerns) while managing and controlling risks.
    5. FIs should avoid implementing IT systems with unmanageable risks.
    6. FIs should consider taking insurance cover if applicable.
  7. Risk Monitoring and Reporting
    1. FIs should institute a monitoring and review process for continuous assessment and treatment of risks. FIs should maintain a risk register to:
      • Prioritise risks based on severity
      • Monitor risks closely
      • Report regularly on the mitigation actions
    2. FIs should use IT risk metrics (consider risk events, regulations, audit observations) to highlight systems, processes or infrastructure with highest risk exposure. Provide an overall technology risk profile to board and SM.
    3. FIs should review, evaluate, and update risk controls as IT environment changes to maintain effectiveness.
    4. Review and update of risk controls should also consider changing circumstances and risk profile of the FI.

5. Management of IT Outsourcing Risks

  1. There are many forms of IT outsourcing. May be single or multiple vendors, local or abroad.
  2. Due Diligence
    1. Board and SM should fully understand the risk of IT outsourcing. Determine the following before appointing outsource vendor:
      • viability, capability
      • reliability, track record
      • financial position
    2. FIs should ensure contractual T&Cs fully cover all roles, relationships, obligations and responsibilities. These usually include:
      • performance targets, service levels
      • availability, reliability, scalability
      • compliance, audit, security
      • contingency planning, disaster recovery (DR) capabilities
      • backup processing facilities
    3. FIs should ensure the outsource service provider (as part of the contractual agreement) grants the FI, its nominated parties, and regulatory authorities unhindered access:
      • to systems, operations, facilities and documentations
      • to review for regulatory, audit or compliance purpose
      • to inspect, supervise and examine service provider’s roles, responsibilities, obligations, functions, systems and facilities.
    4. Outsourcing should never weaken the FI’s internal controls. The FI should require the service provider to employ a high standard of care and diligence in:
      • Security policies, procedures, and controls
      • Protection of confidential and sensitive information (customer data, files, records, object programs and source codes).
    5. FIs should require the service provider to implement the above controls as stringently as the FI itself would.
    6. FIs should monitor and review the above controls regularly, and commission or obtain periodic expert reports on security adequacy and compliance w.r.t. the operations and services provided by the service provider.
    7. FIs should require the service provider to have a DR contingency framework (defining roles and responsibilities for documenting, maintaining and testing DR plans).
    8. Everyone concerned (including outsourced partners) should receive regular training in executing DR.
    9. DR plan should be reviewed, updated and tested regularly, according to changing environment.
    10. FIs should have a contingency plan with viable alternatives to resume operations if the service provider experiences a critical failure in a credible worst-case scenario.
  3. Cloud Computing
    1. Cloud computing is a service and delivery model in which users may not know the exact locations of IT resources in the service provider’s computing infrastructure.
    2. The same principle of due diligence applies to cloud computing. Note these unique attributes and risks:
      • data integrity
      • data sovereignty
      • data commingling
      • platform multi-tenancy
      • recoverability
      • confidentiality
      • regulatory compliance
      • auditing
      • data offshoring
    3. Considering multi-tenancy and data commingling architecture, FIs should ensure service provider is capable of isolating and identifying customer data and information system assets for protection.
    4. FIs should have contractual power and means to promptly remove or destroy data stored with service provider on contract termination.
    5. FIs should verify the service provider’s ability to recover within the stipulated RTO before outsourcing.

6. Acquisition and Development of Information Systems

  1. Many systems fail due to poor design, implementation and testing. FIs should identify defects and deficiencies in the initial project phases.
  2. FIs should establish a steering committee (business owners, developers, stakeholders) to oversee the project.
  3. IT Project Management
    1. Project management framework should include
      • Roles and responsibilities.
      • risk assessment and classification
      • critical success factors
      • milestones and deliverables
    2. FIs should document project plans that set out clear deliverables at each milestone.
    3. FIs should ensure that the following are approved by IT and Business:
      • functional requirements, system design, technical specs
      • business cases and CBA
      • test plans
      • service performance expectation
    4. FIs should establish management oversight to ensure timely completion. Issues that cannot be resolved by the project committee should be escalated to SM.
  4. Security Requirements and Testing
    1. FIs should perform compliance checks on security standards against statutory requirements. Also, FIs should specify security requirements in early phase related to:
      • system access control, authentication
      • transaction authorisation
      • data integrity
      • system activity logging, audit trail, security event tracking
      • exception handling
    2. System testing methodology should be established to cover the following in various stress-load and recovery conditions:
      • business logic
      • security controls
      • system performance
    3. FIs should ensure full regression testing before system changes are made. Affected users should sign off on test results (refer to Appendix A).
    4. FIs should conduct penetration testing (pen-test) for new systems with internet accessibility and open network interface. Also perform vulnerability scanning of external and internal network components connected to the system.
    5. FIs should maintain separate environments for unit, integration, and UAT testing, and closely monitor vendors’ and developers’ access to these environments.
  5. Source Code Review
    1. Program code may conceal threats and loopholes which cannot be effectively identified through testing.
    2. Source code review is a methodical examination to find:
      • coding errors, poor coding practices, malicious codes
      • security vulnerabilities and deficiencies
      • mistakes in system design or functionality
    3. FIs should ensure high degree of system and data integrity for all systems. Ensure appropriate security control that considers complexity of applications.
    4. FIs should perform a combination of testing, source code reviews, and compliance reviews according to risk analysis.
  6. End User Development
    1. Simple self-service tools allow end users to develop their own applications.
    2. FIs should assess the importance of such applications.
    3. Minimum recovery measures, user access and data protection controls should be implemented.
    4. FIs should test end user developed programs to ensure integrity and reliability.

7. IT Service Management

  1. IT service management framework supports:
    • IT systems, services, operations
    • change and incident management
    • stability of production environment
  2. Framework should include governance structure and processes and procedures for:
    • change management
    • software release management
    • incident management
    • capacity management
  3. Change Management
    1. Establish a process to ensure production system changes are assessed > approved > implemented > reviewed.
    2. Process should apply to:
      • system and security configuration changes
      • patches for hardware devices
      • software updates
    3. Risk and impact analysis should be performed before deploying changes. Consider affected:
      • infrastructure, network
      • upstream and downstream systems
      • security implications
      • software compatibility
    4. Changes should be tested before deploying to production. Test plans should be documented, and test results should be signed off by users.
    5. Changes to production environment should only be approved by personnel with delegated authority.
    6. FIs should back up the systems and have a rollback plan prior to the change. Should also have alternative recovery options in case rollback is not possible after the change.
    7. FIs should ensure logs are recorded for changes made.
  4. Program Migration
    1. Migration involves moving codes and scripts from development to test or production environments, and carries the risk of malicious code injection.
    2. Each environment should be physically or logically separated.
    3. If controls in the non-production environment are less stringent than in production, FIs should perform a risk assessment to ensure sufficient preventive and detective controls are in place before migrating.
    4. Segregation of duties should be enforced to ensure no single individual can alone develop, compile and move objects across environments.
    5. Successful changes in production should also be replicated in DR system for consistency.
  5. Incident Management
    1. IT incidents should be managed properly to avoid mishandling or aggravating the situation, which could prolong service disruption.
    2. FIs should establish incident management framework to restore IT services as quickly as possible following an incident, with minimal impact to business. Should include:
      • Roles and responsibilities
      • Recording of incidents
      • Analysing of incidents
      • Remediating of incidents
      • Monitoring of incidents
    3. FIs may delegate to a centralised technical helpdesk for assessing and assigning severity levels to incidents. Criteria of severity levels should be established and documented.
    4. Escalation and resolution procedures, and resolution timeframes should be appropriate to respective severity level.
    5. Escalation and response plan should be tested on a regular basis.
    6. FIs should have an emergency response team made up of internal staff, with the technical and operational skills to handle major incidents.
    7. SM should be kept informed of incident developments so that DR can be activated in a timely manner should an incident escalate into a crisis. Procedures to notify MAS when critical systems fail over to DR should be established.
    8. FIs should have predetermined action plan to address public relations issues, to maintain customer confidence throughout a crisis.
    9. FIs should keep customers informed of any major incident and consider effectiveness of communication (includes informing the general public).
    10. FIs should perform root-cause and impact analysis for major incidents and take remediation actions to prevent recurrence.
    11. FIs should have incident report that includes:
      • executive summary of incident
      • root-cause analysis
      • impact analysis
      • measures to address consequences of incident and the root cause
    12. Analysis should cover:
      1. Root Cause Analysis
        • when, where, why, and how the incident happened.
        • How frequently the incident has occurred over the last 3 years.
        • Lessons learnt from incident.
      2. Impact Analysis
        • Extent, duration, and scope of incident (include information of systems, resources, and customers affected).
        • Magnitude of incident (include foregone revenue, losses, costs, investments, number of customers affected, implications, consequences to reputation).
        • Breach of regulatory requirements.
      3. Corrective and Preventive Measures
        • Immediate corrective action to address consequence of incident (priority on addressing customers).
        • Measures to address root cause.
        • Measures to prevent similar future occurrence.
    13. FIs should address all incidents within corresponding resolution timeframes, and monitor all incidents to their resolution.
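The severity classification and resolution-timeframe tracking described in points 3, 4 and 13 above could be sketched as follows. The criteria and timeframes here are purely illustrative assumptions; each FI must establish and document its own:

```python
from datetime import timedelta

# Hypothetical severity criteria and resolution timeframes -- illustrative
# assumptions only; each FI must define and document its own (points 3-4).
RESOLUTION_TIMEFRAMES = {
    "SEV1": timedelta(hours=4),   # critical system down, customers affected
    "SEV2": timedelta(hours=24),  # degraded service or partial impact
    "SEV3": timedelta(days=5),    # minor issue, no customer impact
}

def classify_incident(critical_system_down: bool, customers_affected: bool) -> str:
    """Assign a severity level from simple, illustrative criteria."""
    if critical_system_down and customers_affected:
        return "SEV1"
    if critical_system_down or customers_affected:
        return "SEV2"
    return "SEV3"

def is_overdue(severity: str, elapsed: timedelta) -> bool:
    """Check whether an incident has exceeded its resolution timeframe."""
    return elapsed > RESOLUTION_TIMEFRAMES[severity]
```

The point of encoding the criteria is that every incident then gets a documented, repeatable severity assignment and can be monitored against its timeframe until resolution.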
  6. Problem Management
    1. Problem management aims to determine and eliminate root causes to prevent the recurrence of problems.
    2. FIs should establish roles and responsibilities, and identify > classify > prioritise > address problems in a timely manner.
    3. FIs should define criteria to categorise problems by severity level, and establish target resolution time and escalation processes for each severity level.
    4. Trend analysis of past incidents will help with problem identification.
  7. Capacity Management
    1. FIs should ensure indicators for systems and infrastructure such as performance, capacity, and utilisation are monitored and reviewed.
    2. FIs should establish monitoring processes and appropriate thresholds so that additional resources can be provisioned in a timely manner.
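A capacity monitoring process along these lines might look like this minimal sketch; the threshold values are illustrative assumptions, not prescribed figures:

```python
# Hypothetical utilisation thresholds -- actual values should follow the
# FI's own capacity management policy.
THRESHOLDS = {"cpu": 0.80, "memory": 0.85, "disk": 0.90}

def check_capacity(metrics: dict) -> list:
    """Return the indicators that have breached their thresholds,
    so additional resources can be provisioned before exhaustion."""
    return [name for name, value in metrics.items()
            if value >= THRESHOLDS.get(name, 1.0)]
```

For example, `check_capacity({"cpu": 0.91, "memory": 0.50, "disk": 0.95})` flags `cpu` and `disk` for follow-up.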

8. Systems Reliability, Availability and Recoverability

  1. This is important as critical system failures can lead to widespread and disruptive impact, affecting reputation and confidence.
  2. FIs should define recovery and business resumption priorities, and test and practise their contingency procedures.
  3. System Availability
    1. Important factors are:
      • adequate capacity
      • reliable performance
      • fast response time
      • scalability
      • swift recovery capability
    2. FIs should develop built-in redundancies to reduce single points of failure. Should maintain standby hardware, software and network components for fast recovery.
    3. FIs should achieve high availability for critical systems.
      • High availability = Other than planned maintenance, downtime should be minimised with suitable resiliency solutions.
      • Critical system = system which will lead to significant impact to operations or customers if failed.
  4. Disaster Recovery Plan
    1. Recovery plan should include scenario analysis for contingency scenarios such as major system outages, hardware malfunction, operating errors, security incidents, and failure of primary DC.
    2. FIs should review and update recovery plan and incident response procedures at least annually or when there are operations, systems or network changes.
    3. FIs should implement rapid backup and recovery capabilities at individual system or application cluster level, considering inter-dependencies when creating recovery plan and contingency tests.
    4. FIs should define recovery and business resumption priorities with specific RTO and RPO.
      • RTO = target time to restore a system after a disruption.
      • RPO = acceptable amount of data loss.
    5. FIs should establish a geographically separated recovery site to restore critical systems and resume business operations when primary site fails.
    6. Recovery speed requirements depend on criticality and available alternatives. FIs may explore on-site redundancy and real-time data replication to enhance recovery capability.
    7. For critical systems outsourced to offshore service providers, FIs should consider cross-border network redundancy, engaging multiple network providers, and alternate network path to enhance resiliency.
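The RPO definition above lends itself to a simple check: data loss after a failure equals the time since the last good backup, so the backup interval must not exceed the RPO. A minimal sketch:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, failure_time: datetime,
              rpo: timedelta) -> bool:
    """Data loss equals the time since the last good backup; the RPO is
    met only if that window does not exceed the acceptable data loss."""
    return (failure_time - last_backup) <= rpo
```

A 4-hour RPO, for instance, is met by a failure 3 hours after the last backup but breached by one 5 hours after it.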
  5. Disaster Recovery Testing
    1. FIs should refrain from adopting impromptu and untested recovery measures during system outage, as they carry high operational risks without validating effectiveness.
    2. FIs should test the effectiveness of recovery requirements and ability of staff to execute the procedures at least annually.
    3. DR tests should cover various scenarios like total shutdown, primary site failure, and individual component failure.
    4. FIs should conduct bilateral or multilateral recovery testing for systems or networks linked to specific service providers.
    5. FIs should involve business users in designing test cases to verify recovered systems. FIs should also participate in DR tests conducted by its service providers.
  6. Data Backup Management
    1. FIs should develop data backup strategy for storage of critical information.
    2. FIs may implement DAS, NAS, or SAN as part of the data backup and recovery strategy. Processes should be in place to review storage architecture, connectivity, and technical support by service providers (refer to Appendix B).
    3. FIs should carry out periodic testing of backup media and assess if media is adequate and effective in supporting recovery processes.
    4. FIs should encrypt backup media (including USB disks) containing sensitive information before transporting to offsite storage.

9. Operational Infrastructure Security Management

  1. FIs should implement security solutions at data, application, DB, OS, and network layers to adequately address potential cyber attacks.
    • Cyber Attacks = phishing, DOS, spam, sniffing, spoofing, hacking, key-logging, MITM, malware.
  2. FIs should have appropriate measures to protect sensitive and confidential information (personal, account, transaction data). Customers should properly authenticate before accessing data. Secure data against exploits like ATM skimming, card cloning, hacking, phishing and malware.
  3. Data Loss Prevention
    1. Insider attacks (from current and ex-staff, vendors and contractors) are among the most serious risks. FIs should adopt adequate measures to detect and prevent unauthorised access, copy, or transmission of important and confidential data.
    2. FIs should have comprehensive data loss prevention strategy that considers:
      • Data at endpoint – notebooks, PC, portable storage, mobile.
      • Data in motion – across network, or transport across sites.
      • Data at rest – files, DB, backup media, storage.
    3. FIs should address risks of data theft, data loss and data leakage from endpoints. Confidential information should be stored with strong encryption.
    4. FIs should not use unsafe internet services to exchange confidential information, and implement measures to detect and prevent the use of such services.
    5. For exchanging confidential information with external parties, FIs should employ strong encryption with adequate key length, and send the encryption key in separate transmission channel. May also use other secure methods.
    6. Confidential information stored on IT systems should be encrypted with strong access controls and principle of “least privilege”.
      • least privilege = “need-to-have” basis.
    7. FIs should determine the appropriate media sanitisation method, depending on the security requirements of the data, to prevent loss of confidential information through the disposal of IT systems.
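As one small illustration of the detective controls mentioned under data loss prevention, a DLP scanner might flag candidate payment card numbers in outbound text using the Luhn checksum. This is a sketch of a single detector, not a complete DLP solution (real products combine many such detectors):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Flag 13-19 digit sequences (spaces/dashes allowed) that pass the
    Luhn check -- candidate card numbers leaving the organisation."""
    hits = []
    for match in re.finditer(r"\b(?:\d[ -]?){12,18}\d\b", text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

Sequences that merely look like card numbers but fail the checksum are ignored, which keeps false positives down.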
  4. Technology Refresh Management
    1. FIs should maintain up-to-date inventory of software and hardware used in production and DR environments, including relevant warranty and support contracts.
    2. FIs should actively replace outdated and unsupported systems, as EOS products cease to have security patches for vulnerabilities.
    3. FIs should establish technology refresh plan to ensure that systems and software are replaced in a timely manner. Conduct risk assessment and risk mitigation for continued usage of systems approaching EOS.
  5. Network and Security Configuration Management
    1. FIs should configure systems and devices with the expected level of security. Establish baseline standards to facilitate consistent security configurations across OS, DB, network devices and enterprise mobile devices.
    2. FIs should conduct regular enforcement reviews to ensure baseline standards are applied, with the frequency of review commensurate with the level of risk.
    3. FIs should apply anti-virus to servers. Update anti-virus definition files regularly and schedule automatic scans.
    4. FIs should install network security devices (firewalls, IDS, IPS) at critical infrastructure junctures to protect the network perimeter. Deploy internal firewalls or similar measures to minimise security exposure to both internal and external networks. Regularly back up and review network security rules to ensure they remain appropriate and relevant.
    5. FIs deploying WLAN should be aware of the risks and implement measures to secure network from unauthorised access.
  6. Vulnerability Assessment (VA) and Penetration Testing
    1. VA is the process to discover > identify > assess security vulnerabilities in a system. FIs should conduct VA regularly.
    2. FIs should deploy a combination of automated tools and manual techniques to perform comprehensive VA (include common web vulnerabilities for VA on web-based external facing system).
    3. FIs should establish process to remedy issues identified in VAs, and validate the success.
    4. FIs should conduct pen-test through simulating actual attacks to evaluate security posture of system. Pen-test on internet-facing system at least annually.
  7. Patch Management
    1. FIs should establish patch management procedures that identify > categorise > prioritise security patches, and have an implementation timeframe for each category.
    2. FIs should test security patches rigorously before deploying to production.
  8. Security Monitoring
    1. FIs should establish security monitoring systems and processes to promptly detect unauthorised or malicious activities by external and internal parties.
    2. FIs should implement network surveillance and security monitoring procedures with network security devices to be alerted of intrusions.
    3. FIs should implement security monitoring tools which detects changes to critical IT resources, to identify unauthorised changes.
    4. FIs should perform real-time monitoring of security events for critical systems.
    5. FIs should regularly review security logs of systems, applications and network devices for anomalies.
    6. FIs should adequately protect and retain system logs for future investigations. Retention period should consider statutory requirements.
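A minimal sketch of the kind of log review described above: counting failed logins per source and flagging sources that exceed a threshold. The log format and threshold are assumptions for illustration only:

```python
from collections import Counter

# Hypothetical threshold -- tune to the FI's own monitoring policy.
MAX_FAILURES = 3

def flag_suspicious_sources(log_lines: list) -> list:
    """Count failed logins per source IP from simple 'FAIL <ip>' log
    lines and flag sources exceeding the threshold for review."""
    failures = Counter(
        line.split()[1] for line in log_lines if line.startswith("FAIL")
    )
    return [ip for ip, count in failures.items() if count > MAX_FAILURES]
```

In practice this sort of rule would run inside a SIEM or log-monitoring tool rather than a standalone script, but the logic is the same.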

10. Data Centres Protection and Controls

  1. It is important for DC to be resilient and physically secured.
  2. Threat and Vulnerability Risk Assessment (TVRA)
    1. TVRA identifies security threats and operational weaknesses in a DC to determine the level and type of protection to be established.
    2. TVRA should consider various scenarios (theft, explosives, arson, unauthorised entry, external attacks, insider sabotage) and various factors:
      • criticality of DC
      • geographical location
      • multi-tenancy and type of tenants in DC
      • impact of natural disaster
      • political and economic climate of country
    3. TVRA scope should include:
      • review of DC’s perimeter and surrounding
      • building and facility, critical mechanical and engineering systems
      • building and structural elements
      • daily security procedures
      • physical, operational, and logical access control
    4. FIs should obtain the TVRA of the provider’s DC, and verify that the report is current and that the provider is committed to addressing the vulnerabilities identified, before selecting a DC. TVRA should be performed during the feasibility study when building the FI’s own DC.
  3. Physical Security
    1. FIs should limit DC access to authorised staff only (principle of least privilege).
    2. FIs should ensure temporary access for non-DC personnel is properly notified and approved, and that such visitors are accompanied by an authorised employee.
    3. FIs should ensure DC is physically secured and monitored, employing physical, human, and procedural controls where appropriate (security guards, card access systems, mantraps, bollards).
    4. FIs should deploy security systems and surveillance tools to monitor and record activities within DC. Have physical security measures to prevent unauthorised access to systems, equipment racks and tapes.
  4. Data Centre Resiliency
    1. FIs should assess redundancy and fault tolerance in:
      • electrical power
      • air-conditioning
      • fire suppression
      • data communications
    2. FIs should monitor and regulate environment within DC such as temperature and humidity. Escalate to management and resolve abnormalities detected in a timely manner.
    3. FIs should implement appropriate fire protection and suppression systems capable of controlling a full-scale fire. These include:
      • smoke detectors
      • hand-held fire extinguishers
      • passive fire protection (e.g. fire wall)
    4. FIs should install backup power consisting of:
      • uninterrupted power supplies
      • battery arrays
      • diesel generators

11. Access Control

  1. Three of the most basic internal security principles for protecting systems:
    • Never alone principle = critical systems functions and procedures are carried out by more than one person or at least checked by another person. Includes critical systems initialisation and configuration, PIN generation, creation of cryptographic keys, use of admin accounts.
    • Segregation of duties principle = design transaction processes so that no single person may initiate, approve, execute, and enter transactions into a system, to prevent fraud. Job rotation for security administration. Responsibilities for the following should be performed by separate groups:
      • OS functions
      • systems design and development
      • application maintenance
      • access control administration
      • data security
      • librarian and backup data file custody
    • Access control principle = only grant access rights on principle of least privilege, regardless of rank or position. Only provide authorisation for legitimate purposes.
  2. User Access Management
    1. FIs should only grant access on a need-to-use basis and within the required period. Ensure that the resource owner authorises and approves the access.
    2. External parties given access to critical systems should be subjected to close supervision, monitoring, access restrictions similar to internal staff.
    3. FIs should ensure users are uniquely identified and their access logged for audit and review purposes.
    4. FIs should regularly review user access privileges to verify that privilege is appropriate, and identify dormant or wrongly provisioned accounts.
    5. FIs should enforce strong password controls that include:
      • change of password on first logon
      • minimum password length and history
      • password complexity
      • maximum validity period
    6. FIs should ensure no one has concurrent access to production and backup systems, and access to backup systems should only be for specific reason and period.
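The password controls listed in point 5 above can be sketched as a simple validation routine. The minimum length, complexity classes, and history depth here are illustrative assumptions; the FI's own policy values should apply:

```python
import re

def password_ok(password: str, history: list,
                min_length: int = 12, history_depth: int = 5) -> bool:
    """Check a new password against illustrative controls: minimum
    length, complexity (upper, lower, digit, symbol) and password
    history. Parameter values are assumptions, not policy."""
    if len(password) < min_length:
        return False
    if password in history[-history_depth:]:   # no reuse of recent passwords
        return False
    classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(c, password) for c in classes)
```

Forcing a change on first logon and enforcing a maximum validity period are account-lifecycle controls handled by the directory or identity system rather than by this check.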
  3. Privileged Access Management
    1. FIs should apply stringent selection criteria and thorough screening when appointing staff for critical operations and security functions.
    2. These staff (system admins, security officers, programmers) are capable of severely damaging critical systems by virtue of their privileged access.
    3. FIs should closely supervise these staff, log and review their system activities, and adopt the following controls and security practices:
      • strong authentication mechanism (e.g. 2FA).
      • strong control over remote access.
      • restrict the number of privileged users.
      • Grant privileged access on “need-to-have” basis.
      • Maintain audit logging of system activities.
      • Disallow privileged users access to logs of systems they are accessing.
      • Review activities on a timely basis.
      • Prohibit sharing of accounts.
      • Disallow privileged access for vendors and contractors without close supervision.
      • Protect backup data from unauthorised access.

12. Online Financial Services

(Refers to provision of banking, trading, insurance, other financial services and products via electronic delivery channels)

  1. FIs should recognise the risk of offering services via internet platform.
  2. Varying degree of risks are associated with different types of services:
    • information service
    • interactive information exchange service
    • transactional service (highest risk due to irrevocable execution)
  3. FIs’ risk management process should clearly identify the risks and formulate security controls, system availability, and recovery capabilities commensurate with the level of risks.
  4. Online Systems Security
    1. FIs should devise security strategy to ensure confidentiality, integrity, and availability of data and systems.
    2. FIs should assure customers that online services are adequately protected and authenticated.
    3. MAS expects FIs to properly evaluate security requirements associated with internet systems and adopt well-established international encryption standards (refer to Appendix C).
    4. FIs should ensure information processed, stored, or transmitted is accurate, reliable and complete, by implementing physical and logical access security, processing and transmission controls.
    5. FIs should implement monitoring or surveillance system to be alerted of abnormal system activities, transmission errors, or unusual transactions, and have follow-up process to verify the issues are addressed.
    6. FIs should maintain high resiliency and availability, put in place measures to plan and track capacity utilisation and guard against online attacks (refer to Appendix D).
    7. FIs should implement 2FA login and transaction-signing. These secure the authentication process, protect data integrity, and enhance customer confidence.
    8. For systems serving institutional investors, accredited investors or corporate entities, using alternate controls and processes to authorise transactions, FIs should perform risk assessment to ensure security level is at least as adequate as token-based mechanisms.
    9. FIs should take appropriate measures to minimise exposure to other cyber attacks such as man-in-the-middle (MITM), man-in-the-browser, and man-in-the-application attacks (refer to Appendix E).
    10. FIs should implement measures to protect customers, educate them on the measures put in place, and ensure they have access to continual education to raise security awareness (refer to Appendix F).
  5. Mobile Online Services and Payments Security
    1. Mobile Online Services refers to provision of financial services via mobile devices, either through web browser or FI’s self-developed applications on mobile platforms (Apple iOS, Google Android, Microsoft Windows OS).
    2. Mobile Payment refers to use of mobile devices to make payments, which may use various technologies (e.g. NFC).
    3. Both are extensions of online financial services. FIs should implement similar security measures as online financial services, conduct risk assessment and implement appropriate measures to counteract payment card fraud on mobile devices.
    4. FIs should ensure protection of sensitive or confidential information as mobile devices are susceptible to theft and loss. Implement encryption to secure data in storage and transmission, and ensure processing is done in a secure environment.
    5. FIs should educate customers on security measures to protect their own mobile devices from malware.

13. Payment Card Security (Automated Teller Machines, Credit and Debit Cards)

  1. Payment cards allow physical purchases, online purchases (including mail-order and telephone orders) and ATM cash withdrawals.
  2. There are many forms of payment cards. Magnetic stripe cards are vulnerable to skimming attacks, which can take place during payment card processing (at ATMs, payment kiosk, EFTPOS terminals).
  3. Payment card frauds include:
    • counterfeit
    • lost or stolen
    • card-not-received (CNR)
    • card-not-present (CNP)
  4. Payment Card Fraud
    1. FIs offering payment card services should protect sensitive data. Implement encryption to secure data in storage and transmission, and ensure processing is done in a secure environment.
    2. FIs should use secure chips to store sensitive data and implement strong authentication methods such as dynamic data authentication (DDA) or combined data authentication (CDA). Should not use magnetic stripe to store sensitive data. If interoperability concerns require the use of magnetic stripe for transactions, ensure adequate control measures are implemented.
    3. For transactions using ATM cards, authentication of sensitive customer information should be performed by the FI itself (not a third-party service provider). FIs should perform regular security reviews on infrastructure and processes used by service providers.
    4. FIs should ensure security controls on payment card systems and network.
    5. FIs should only activate new payment cards upon obtaining customer’s instruction.
    6. FIs should implement dynamic OTP for CNP transactions via internet to reduce risk.
    7. FIs should promptly notify cardholders when withdrawals or charges exceeding customer-defined thresholds are made. Alerts should include the transaction source and amount.
    8. FIs should implement robust fraud detection systems with behavioural scoring or equivalent, and correlation capabilities. FIs should set out risk management parameters according to risk posed by cardholders, nature of transactions or other risk factors.
    9. FIs should investigate transactions that deviate significantly from the cardholder’s usual usage patterns and obtain the cardholder’s authorisation before completing such transactions.
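Dynamic OTPs, such as the OTP for CNP transactions in point 6 above, are commonly generated with the HMAC-based One-Time Password algorithm (HOTP, RFC 4226); time-based OTP (TOTP) simply derives the counter from the clock. A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC-SHA1 the counter,
    dynamically truncate to 31 bits, then take the low decimal digits."""
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because each code is derived from a moving counter (or timestamp), an intercepted OTP is useless for a later transaction, which is what makes it effective against CNP fraud.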
  5. ATMs and Payment Kiosk Security
    1. ATMs and payment kiosks (e.g. SAM and AXS) are targets of card skimming attacks.
    2. FIs should consider the following measures to secure consumer confidence in using these systems:
      • anti-skimming solutions to detect foreign devices placed over or near card entry slot.
      • detection mechanism that sends alerts to FI staff for follow-up responses and actions.
      • tamper-resistant keypads to ensure customers’ PINs are encrypted during transmission.
      • appropriate measures to prevent shoulder surfing of customers’ PINs.
      • Video surveillance of activities at the machines and maintain quality CCTV footage.
    3. FIs should verify that adequate physical security is implemented at third-party payment kiosks which accept and process the FI’s payment cards.

14. IT Audit

  1. FIs need to develop effective internal control systems to manage technology risks.
  2. IT audit provides the Board and SM with an independent and objective assessment of the effectiveness of controls to manage technology risks.
  3. FIs should establish organisational structure and reporting lines for IT audit in a way that preserves the independence and objectivity.
  4. Audit Planning and Remediation Tracking
    1. FIs should ensure IT audit scope is comprehensive and includes all critical systems.
    2. IT audit plan comprising auditable IT areas for the coming year should be developed, and approved by the FI’s Audit Committee.
    3. FIs should establish an audit cycle and determine the frequency of IT audits commensurate with the criticality and risk of the IT system or process.
    4. Follow-up process to track and monitor IT audit issues, and escalation process to notify IT and business management of key issues should be established.



I will be publishing appendix A-F of the TRMG in a separate blog post.

Analytics 001: Power BI walkthrough

All the consultants out there are talking about deriving more insights from your “data” and your own top management are also singing in unison to move towards a “data-driven” future. No doubt, “Qlik” and “Tableau” have definitely been brought up in these conversations. You must be thinking, “Seriously? This software is gonna bring change to my company? Oh please!”.

Well I had that exact same thought. So I decided to explore these Business Intelligence (BI) tools further, starting with Microsoft Power BI. Why? Because it is free!

What is Power BI?

Power BI logo

Check it out from the official site:

I would say it is Microsoft’s attempt to compete with the two leading BI tools by leveraging its strengths in Office Suite and Cloud Services. My first impression of Power BI was pretty good. The start-up is fast, the UI is similar to a typical Office application, and making changes to the dataset is also smooth. If you are a seasoned Excel user, then learning to use Power BI should be a breeze for you. As an absolute noob, you might have to go through the tutorials offered by Microsoft and hang out in the Power BI forum.

However, after using the tool for a while, I came to realise that all the hype over “Self-Service” BI does not equate to ease of use or intuitiveness. The tool merely combines some basic data extraction and manipulation with the ability to create graphs and charts. It is definitely not designed for ordinary users who do not have an inkling of how data sets can form relations, how to perform table join operations, and how data can be presented objectively. Even though it is not necessary, having some foundation in data science will certainly help.

Cloud, Desktop, and Mobile

The Power BI desktop client is the application designed primarily to interact with the data and build reports. Power BI also provides a cloud service that can perform the required BI analysis, although the main objective of this service is to publish and share any reports that you have built. The mobile application is designed for users to read the published reports from the cloud using their mobile devices.

So for someone who would like to explore the analysis capabilities of this tool, installing the desktop client is a must.

Report Building Process

This is generally the sequence of actions for building a new report from scratch. I have yet to figure out how to create a report development workflow that will apply existing report templates to new or updated data sources. The following are performed with Power BI Desktop.

  1. Connect to Data Source
    In my case, I worked with a couple of Excel spreadsheets. You may consider connecting directly to databases. After connecting, the tool will extract/query all the available data tables.
  2. Modify the Data
    The data table may not be in a desirable format to produce any useful visualisation. Use the Query Editor to manipulate the data without modifying the source. It provides a number of useful functions, it processes the familiar Excel-like formula language (DAX or Data Analysis Expressions), and can also execute your custom scripts on the data. I noticed that every change performed on the data is recorded as a layer (similar to Photoshop). You may go up and down the stack and modify or remove any layers you wish. This comes in handy when you are trying to explore/clean up your data but do not want to commit the changes until you are confident.
  3. Relationship View
    One of the three views that Power BI Desktop provides. This allows you to create relationships between different data tables (outputs from step 2). Note that relationships only make it easier for you to find information across tables pertaining to specific records using related fields between tables. It is not a join operation and it will not provide any benefit that a table join would otherwise provide. (Table joins should be performed at step 2.)
  4. Data View
    I honestly feel that this view provides the same functionality as the Query Editor, which makes me wonder what I should do with this view. My guess is that any further modification of the data that you failed or forgot to perform in step 2 can be performed in this view. Some of the things you can do to your data include filtering, data transforms, data transposes, changing data formats, data processing using DAX, aggregation, table joins, adding and removing fields and records, etc.
  5. Report View
    When you have finally gotten all your data prepared, the report view will assist you to create visualisations from the data and assemble them into a report that can be published on the cloud service. In the report view, all that you should be doing is to match data and present them using the variety of charts. You should not attempt to manipulate the data within the report view. Of course, even if you are not prepared, you can revisit any of the previous steps and make the necessary adjustments.
  6. Publish to Cloud
    And after all your hard work, you may publish the report to the cloud service and share your creation with others. You may also choose to modify the Phone view so that the report may be accessed from a mobile device using the Power BI mobile app.
  7. Interactive Visuals
    What I like about Power BI is the interactive visualisations, which automatically filter values and change the charts dynamically as you click on them. You are also able to drill down into deeper details of the data.

Additional Helpful Tips

The following information gets into the finer details of using Power BI. These tips were very helpful when I was learning the tool, and I will just park them here for future reference.


Data Preparation

Data Modelling


Business 001: Digital Technology and Retail Malls

Traditionally, location and tenant mix are the two most important factors that determine the success of retail mall businesses. Even though these factors remain important, the use of digital technology is allowing weaker players to level the playing field in the industry.

Facing Digital Disruption

In the past, the retail mall business was concerned only with the management of physical space. However, with the proliferation of online retailing, the industry is struggling to handle digital disruption, with various malls adopting a spectrum of strategies. At one extreme, we see retail malls embracing the online platform by rolling out “click & collect” programs, allowing shoppers to make purchases online but collect the goods at the physical store. At the other extreme, there are malls which reject digital technology entirely. Most retail malls adopt some form of digital technology to complement their business, but at this stage there is no indication which strategy will come to dominate the industry, and all malls are expected to tread carefully in this area.

Connecting with Shoppers

Omni-channel marketing has been gaining attention in the industry. Technology is empowering retailers to connect with potential customers through more targeted and personalised messages across multiple digital platforms. Compared to Above The Line (ATL) marketing channels, digital marketing is likely to be more cost-effective. The software tools to design, orchestrate, execute and analyse omni-channel marketing campaigns are also readily available in the market. Should the mall management not have any IT expertise, this software is also available under cloud subscription models.

However, even if a retail mall wishes to embrace such technology, the direct impact on its revenue is still limited. The mall would ultimately require the cooperation of its tenants, as the core content of any marketing campaign is still made up of the products and services of the shops in the mall.

Loyalty programs are another avenue for retail malls to grow their business organically; almost all of our local retail malls have some form of loyalty program. This is a great initiative for the mall, as a modest investment in a Customer Relationship Management (CRM) system to hold the data of all the loyal members will allow the mall to gain deeper insights through analytics and generate leads for future marketing campaigns. The loyalty program also provides great value and cost efficiency, as a promotion such as “10% off” will yield the desired outcome while limiting the cost to strictly 10% of sales, or whatever the ratio of the promotion.

Besides the CRM system, technology plays a part in helping retail malls acquire and retain loyal members through websites and mobile applications. The keys to success are to provide a great user experience, keep the content fresh, and deliver the right information to the right target audience.

Expectations of Tenants

The expectations of tenants have also evolved, creating another set of challenges for retail malls. The trend of pop-up stores is expected to pick up, as more online retailers look to physical stores to complement their sales activities, such as organising flash sales or setting up showrooms. Traditional lease durations and terms do not appeal to this new segment of tenants, who will be looking for shorter leases and lower rentals. However, as retail malls cater to the needs of online retailers, the management must take note of the potential backlash from existing tenants. This poses a dilemma which may eventually cause a paradigm shift in the business model of retail malls.

Most retail mall rental rates include a variable component that adjusts according to the tenant's sales turnover. However, as shops increasingly look to close sales through their online platforms, physical shop spaces in malls will function as showrooms and storage space rather than venues for transactions. Malls will no longer be able to accurately measure the sales performance of their tenants and price rentals accordingly. This is certainly a tough problem with no clear resolution available.

Data Integration 005: Talend, Mapping with tMap


Talend job that matches data from 2 files and filters out specified records, using tMap.

In this example, I am creating a job that reads from two separate CSV files: one is a list of employee mobile phone numbers, the other a list of the employment statuses of all employees. After matching a record against both documents, if the employee is still active, the employee's mobile phone number shall be updated in an MS SQL DB. (More details on how to connect to MS SQL Server here.)

Talend has provided an extremely convenient component to achieve my objective: the tMap component. This tMap component steps through the data records in my employee mobile number file (Main) and matches the specified primary key against the reference file (Lookup), which contains the employee statuses. When the status of a particular record meets my condition, it is generated as an output from tMap, which can then be loaded into the DB by a separate DB output component.


Mapping data inputs and outputs.

The tMap schema editor presents a simple drag-and-drop UI for you to relate the input and output data. To filter for certain conditions in the data, you will have to manipulate the data using expressions (written in Java). More introduction to tMap expressions can be found here. In my example, I actually have two outputs. The second output catches all the data records rejected by the first output's condition (Catch output reject = True) and presents them to me in the console using a tLogRow component.
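
Under the hood, the tMap step is essentially a keyed lookup plus a filter, with rejects routed to a second output. As a rough illustration only (this is not Talend's actual implementation; the record and field names below are hypothetical), the accept/reject flow could look like this in plain Java:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TmapSketch {
    // Hypothetical record types standing in for the two CSV schemas.
    public record Phone(String empId, String mobile) {}
    public record Status(String empId, String status) {}

    // Main flow: phone records. Lookup flow: status keyed by empId.
    // Records whose status matches the condition reach the main output;
    // everything else lands on the reject output ("Catch output reject").
    public static Map<String, List<Phone>> join(List<Phone> main, Map<String, Status> lookup) {
        List<Phone> accepted = new ArrayList<>();
        List<Phone> rejected = new ArrayList<>();
        for (Phone p : main) {
            Status s = lookup.get(p.empId());                 // match on the primary key
            if (s != null && "ACTIVE".equals(s.status())) {   // the filter expression
                accepted.add(p);
            } else {
                rejected.add(p);
            }
        }
        return Map.of("out", accepted, "reject", rejected);
    }
}
```

The accepted list corresponds to the rows handed to the DB output component, while the rejected list corresponds to what tLogRow prints to the console.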


Schema of output to be loaded into DB.

I did run into a small problem when I ran the job. The DB table allowed the field “isPrimary” to be nullable even though it holds the type Boolean. Naturally, I had used the DB table schema to design the output of my Talend job; however, Talend does not recognise a null value for the type Boolean by default. I had to explicitly specify that the Boolean data field in my output is nullable. (Which makes me wonder: why doesn't Talend enforce this, instead of allowing the user to specify otherwise and then throwing an error?)
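
The distinction matters because in Java the primitive boolean cannot represent NULL at all, while the box type Boolean can. A tiny sketch of the difference (the field name and the "null means false" rule are my own assumptions, not Talend's behaviour):

```java
public class NullableBooleanDemo {
    // A nullable BIT/Boolean DB column arrives as the box type Boolean,
    // which may be null; the primitive boolean cannot hold null.
    // Mapping null to false here is an assumed business rule for this sketch.
    public static boolean isPrimary(Boolean dbValue) {
        return dbValue != null && dbValue;
    }
}
```

Auto-unboxing a null Boolean straight into a boolean would throw a NullPointerException, which is roughly the class of failure the schema's Nullable flag guards against.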


Data Integration 004: Talend, Connect to MSSQL

I ran into loads of problems trying to connect to a Microsoft SQL Server (2008 R2) using my Talend job to update a DB.

(Disclaimer: there were no network issues; the server was definitely reachable.)

The MSSQL Connector that does not work

First, I tried to use tMSSqlConnection to connect to the server, but the job could not even compile due to a missing library, mssql-jdbc.jar. Fine; typically the Open Studio will prompt me to install the additional required libraries after I agree to some terms and conditions, but no! This time round, the “download and install” button for said library was completely greyed out, and there was no way I could download it from within the studio. After taking advice from forums and Stack Overflow, I decided to manually download the required Java Archive file and install it. Honestly, I am not even sure if it was the right file. As expected, it did not work.

Next, I turned to tOleDbConnection, as recommended by some posts in the forums. Nice! It managed to build and run, but it threw up an error. I could not tell what was wrong with the connection from the Open Studio console, so I started to look at the access logs in SQL Server Management Studio (SSMS): “The login is a SQL Server login and cannot be used with Windows Authentication”. It turned out that my SQL server was configured for Windows Authentication only, but the Talend connector inherently does not support Windows Authentication.

Some online sources had recommended changing the authentication mode as simple enough, except that you need to restart the server. It was between this option and trying to enable Windows Authentication on the Talend connector, which I assumed was not going to be simple, as it was not provided out of the box. I was like, let's go for the easy way.

Mixed Authentication Mode

And here is where the nightmare began. After changing the setting to mixed authentication mode, the Talend connector could successfully connect to the server. However, at the same time, all the other existing applications that connect to DBs on the server broke. Man, I did not see that coming. “Mixed Authentication” had misled me into thinking that restarting the server was a good idea. There were simply too many applications for me to troubleshoot alone. The best way out was to revert to the original configuration. (I had cloned the server VM beforehand, so that if development did not go well, I could restore the entire server instance from the clone.)

For some weird reason, I could not log in to the server using SSMS no matter how I tried. Then it dawned on me that the admin account was a Windows account, and if Windows Authentication mode had been denied, I would need another admin account permitted for SQL Server Authentication. Nope, there was no such admin account (not that I knew of; I did not assume there was a default admin account, or that people had been careless enough to keep one accessible). At this point I wished for a detonation button to wipe the server off the face of the Earth.

So there was no way for me to change the authentication mode at all. There was only one thing left to do: restore the entire server instance to some point back in time.

And that is how the episode ended. Eventually I had to stick with using tOleDbInput with Windows Authentication. This requires janet-win32.dll or janet-win64.dll (depending on whether you have a 32-bit or 64-bit system) to be available on the java.library.path of your JVM. I am very thankful for this guide providing all the required information.
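
As a quick sanity check that the DLL's folder is actually visible to the JVM, you can print the entries of java.library.path. The helper below is a generic sketch (the janet-win64.dll name comes from this post; the class name and the example -D path are illustrative):

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class LibraryPathCheck {
    // Split a java.library.path value into its component directories.
    public static List<String> pathEntries(String libraryPath) {
        return Arrays.asList(libraryPath.split(File.pathSeparator));
    }

    public static void main(String[] args) {
        // janet-win64.dll can only be loaded if its folder appears here,
        // e.g. when the job is launched with -Djava.library.path=C:\path\to\dlls
        for (String dir : pathEntries(System.getProperty("java.library.path"))) {
            System.out.println(dir);
        }
    }
}
```

Entries are separated by File.pathSeparator (`;` on Windows, `:` elsewhere), which is why the split is done with that constant rather than a hard-coded character.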

Update or Insert

The job worked well. I wanted to modify a table in the DB such that if a record does not exist, a new record is inserted; otherwise, the existing record is updated. This is when I came across the convenient option:


In my scenario, I will not often have to insert a new record, so the “update or insert” option is more efficient. More differences between the options are explained here.
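
The behaviour of “update or insert” can be sketched as plain control flow: attempt the UPDATE first, and INSERT only when no row matched, which is cheap when most keys already exist. The in-memory map below is just a stand-in for the real DB table to keep the sketch runnable (all names are hypothetical, and this is not Talend's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertSketch {
    private final Map<String, String> rows = new HashMap<>(); // stand-in for the DB table

    // Analogue of: UPDATE employees SET mobile = ? WHERE empId = ?
    // Returns the affected row count, like JDBC's executeUpdate().
    public int update(String empId, String mobile) {
        if (!rows.containsKey(empId)) return 0;
        rows.put(empId, mobile);
        return 1;
    }

    // Analogue of: INSERT INTO employees (empId, mobile) VALUES (?, ?)
    public void insert(String empId, String mobile) {
        rows.put(empId, mobile);
    }

    // "Update or insert": try the update first; insert only if no row matched.
    public void upsert(String empId, String mobile) {
        if (update(empId, mobile) == 0) {
            insert(empId, mobile);
        }
    }

    public String get(String empId) {
        return rows.get(empId);
    }
}
```

The mirror option (“insert or update”) simply flips the order, which would be the better choice when inserts dominate.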

Build and Export

After completing the job, I built and exported it as a Windows batch file (.bat) to run on a production system. That is where I ran into another roadblock. After triggering the batch script with the Windows Task Scheduler, the console kept throwing an error: “UnsatisfiedLinkError: somepath\janet-win64.dll: Can’t find dependent libraries”. This does not mean that the system cannot find janet-win64.dll itself, as I had also imported that; rather, the Dynamic-Link Libraries (DLLs) that janet-win64 depends on were missing.

That was just crazy to me, as my development environment was made from an exact VM clone of the production instance. How could anything be missing? I googled hard for two hours, and then it finally hit me: I had installed SSMS on the development server. That installation, which the Talend job does not depend on, had also installed the latest Visual C++ Redistributable packages, which enabled janet-win64.dll to function and in turn enabled the Talend job. It totally blew my mind that these unrelated actions enabled the success of the Talend job. There was also no way for me to discover this dependency during development, as I had installed SSMS before starting work on the Talend job.


Do not change the SQL server authentication mode if there are live connectors running. Always keep track of changes made to a server, even if they seem unrelated to the development.

Data Integration 002: Talend, Working with FTP

Working with FTP is simple enough. This is the first project I attempted using Talend: download CSV files from an FTP server over the internet, perform some data manipulation, and finally upload the modified files into another folder on the same FTP server.


The Structure

The process of my Talend job is as follows:

  1. Using a tPreJob, I begin reading from a config file to obtain the key parameters for this project, and perform a tContextLoad.
  2. The main job starts with a tJava SubJob. It contains my custom Java logic to redirect stdout and stderr for writing log files, set up any necessary global variables, or anything you want. It is flexible.
  3. The deactivated SubJob contains a tContextDump for me to read the context (or config, in my own terms) that I have loaded for this job, and a tLogRow connected via Iterate to write it to stdout.
  4. Next, tFTPConnection connects to the FTP server and locates the desired folder, and tFTPGet downloads the files.
  5. tStatCatcher is used to read the statistics of any components that have the stat catcher option checked. (Honestly, I still have not figured out how to use this.)
  6. Connecting tDie via OnSubJobError to SubJobs will kill the main job when that particular SubJob fails, and throw out any error messages that you have set.
  7. tPostJob will be executed at the end, regardless of whether an error killed the job. I used it to tFTPClose the FTP connection and clean up any streams that were opened.
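
The stdout/stderr redirection mentioned in step 2 boils down to a couple of standard Java calls that a tJava component can carry. A minimal sketch, where the class name and log file paths are placeholders of my own, not anything Talend prescribes:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;

public class LogRedirect {
    // Point stdout and stderr at log files, as a tJava component might do
    // at the start of a job. Streams append (true) and auto-flush (true)
    // so log lines survive an abrupt job death via tDie.
    public static void redirect(String outPath, String errPath) throws IOException {
        System.setOut(new PrintStream(new FileOutputStream(outPath, true), true));
        System.setErr(new PrintStream(new FileOutputStream(errPath, true), true));
    }
}
```

From that point on, every System.out.println in the job (including tLogRow output) lands in the log file instead of the console, which is why the tPostJob cleanup in step 7 should also close or flush anything opened here.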

It works perfectly fine. However, I am uncertain if this is the best way to structure a job; I will continue to improve on it.

Here is part 2 of the job, which uploads the files back to the FTP server. I have split the work into two distinct jobs so that they can be reused independently for other projects. Reusability was also my top consideration when I decided to use context loading for these two jobs.