Friday, June 4, 2021

Microlearning

Microlearning is an educational strategy that breaks complex topics down into short-form, stand-alone units of study that can be viewed as many times as necessary, whenever and wherever the learner has the need. Microlearning instructional modules are designed to be consumed in about five minutes, and each addresses one specific skill or knowledge gap.

The convenience of microlearning, from both the learner's and the educator's point of view, has made this type of instructional delivery popular in corporate learning environments. Scientific research suggests that a self-directed, modular approach to talent pipeline development improves knowledge retention. It also empowers employees by giving them the opportunity to build new skills directly in the context of the job they are being paid to do, without having to take time away from that job to attend training.

Although microlearning is most often associated with independent learning, modules can also be strung together to create guided learning experiences for individuals or small groups. The small chunks of instructional content can be tagged with metadata for easy search, access and reuse.

How does microlearning work?

In any given module, the learner is typically given 3-6 minutes to learn one specific objective by completing an action item such as: 

  1. Watching a short instructional video and answering a question.
  2. Playing an online learning game designed to teach a specific task. 
  3. Reading an executive summary and answering a short series of questions. 
  4. Viewing an infographic and answering a short series of questions. 
  5. Using virtual flashcards to prepare for a quiz.
  6. Virtually participating in a scenario-based simulation.

Microlearning modules are most often accessed as the need for knowledge arises, but they can also be assigned as part of an employee’s monthly or quarterly goals. Instructional modules, which are tagged with metadata that describes the module's learning objective, are typically stored in a library that is accessed through a mobile app, learning experience platform (LXP), public website or proprietary online knowledge base.
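As a rough illustration, a module library of this kind can be modeled as records carrying metadata tags, with search reduced to tag matching. The `Module` fields, titles and tags below are invented for the example, not taken from any particular LXP:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """One microlearning unit, tagged with metadata for search and reuse."""
    title: str
    objective: str
    duration_min: int
    tags: set[str] = field(default_factory=set)

def search(library: list[Module], tag: str) -> list[Module]:
    """Return every module tagged with the given keyword."""
    return [m for m in library if tag in m.tags]

library = [
    Module("Phishing basics", "Spot a phishing email", 4, {"security", "email"}),
    Module("Pivot tables", "Summarize data in a spreadsheet", 5, {"spreadsheets"}),
]

# A learner (or an LXP) can pull just-in-time content by tag:
print([m.title for m in search(library, "security")])  # ['Phishing basics']
```

The same tags that drive search also make it easy to string modules together into a guided learning path.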

Advantages of microlearning

A microlearning approach to staff development can address the problems of monolithic training platforms in a more natural manner. Perhaps the biggest advantage of microlearning is that the student can conduct a learning session at any time, from anywhere, using any type of computing device.

In the past, a lot of e-learning initiatives were built around a macro-learning format that is commonly referred to as a MOOC (massive open online course). In corporate settings, MOOC content was often video-focused and the content was delivered through a learning management system (LMS) overseen by the organization’s human resource (HR) department or Chief Learning Officer (CLO).

While long-form presentations seemed to work well for high-level introductory material, employees often found it challenging to retain the information they received during marathon training sessions. Some HR managers also received pushback for this type of “just-in-case” training because it required employees to be pulled away from their daily work.

Another advantage of using a microlearning strategy is that short-form content is easier to update than long-form content. Having the ability to easily update learning modules is an important concern for educating employees in highly regulated industries such as finance and healthcare, because information in these two vertical industries constantly changes. When educational content is created in small, bite-sized modules, it can easily be updated to reflect new laws and regulations. 

Recently, the U.S. Securities and Exchange Commission determined that a firm can be guilty of security compliance violations simply for having an ineffective training program. Ensuring that the organization's training materials are always up to date is another important driver for microlearning in the enterprise.

Disadvantages of microlearning

Though microlearning is an effective learning strategy for reinforcement and retention, it is an inefficient approach to education for learners who need to gain mastery over a broad topic in a short period of time or acquire knowledge about a concept that cannot be broken down easily. 

In this type of learning scenario, a microlearning approach might even be harmful, especially if the learner lacks the necessary background to supply context and relate one learning objective to another. Consider a university-level course in organic chemistry, for example. If every learning objective were broken down into lesson chunks of ten minutes or less, it's likely that many students would struggle to master the material.

Thursday, June 3, 2021

Security Training

Humans are considered the first line of defence in the cyber security posture of organisations today. By offering security awareness training programmes, businesses can educate their employees about a range of growing cyber security risks and what to do if they notice one.

With cyber criminals increasingly targeting businesses and their employees, security awareness training is more important than ever. But despite this, users often pay little attention to cyber training and end up putting their organisation’s security at risk as a consequence. So, how can security teams get employees to take training seriously?

Developing a security culture

Getting staff to understand the importance of security training, both for themselves and for the entire organisation, is a major challenge employers currently face. Security training often carries a negative connotation, so convincing employees that this training is not just important for the organisation but also helpful for themselves can itself be a challenge.

A culture shift is therefore needed to solve this problem: a security culture must be developed within the organisation. This will help employees get on board with security-related efforts such as training.


Outcome-based contracting sees uptick post pandemic

 Outcome-based contracting has become more popular during the COVID-19 pandemic as organizations look to share risk with service providers.

Industry executives reported increased interest in this contracting approach, which links some, or all, of a service provider's payment to meeting performance objectives. Outcome-based contracting has existed for decades but appears to be gaining ground in the COVID-19 era. That contracting shift could affect service provider business models.

Forty-seven percent of the 200 senior executives polled by Boston Consulting Group (BCG) expect increased use of outcome-based contracts. The management consulting firm's study, "Postpandemic Outsourcing Trends for CEOs," noted that companies "have to change the nature of contracts so that they share more risks and rewards with service providers."

'Skin in the game'

If partners have yet to encounter outcome-based approaches, they may soon. The BCG report found that 62% of respondents are likely to renegotiate their service provider contracts in 2021.

Business continuity and resiliency are among the outcomes companies look for as they respond to COVID-19 and prepare for future disruptions with adaptive strategies.

"There is a lot more emphasis on how the vendor and service provider can put skin in the game in terms of keeping the lights on," said Hrishi Hrishikesh, partner and director of digital transformation at BCG and one of the report's authors. "That is more important with COVID."

Customers are structuring contracts in a couple of ways to share risk and reward with service providers, Hrishikesh noted. One method puts fees at risk. That is, a portion of the service provider's fees may not be paid if the provider fails to meet or deliver the contract's specified outcomes. The other approach is gain sharing, in which a customer agrees to share a portion of the upside if the provider exceeds the agreed upon outcomes. The gain sharing fee is over and above the provider's fee for satisfying the contract's basic requirements.

In both contract structures, outcomes could be outputs such as the number of user stories delivered per agile sprint or business outcomes such as quicker processing times, Hrishikesh said. Customers often measure outcomes using specific metrics, with targets and thresholds explicitly written into contracts, he added.
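The two structures Hrishikesh describes can be sketched in a few lines. The fees, targets and percentages below are invented for illustration; real contracts spell out the metrics, targets and thresholds in far more detail:

```python
def provider_payment(base_fee: float, at_risk_pct: float,
                     target: float, actual: float,
                     gain_share_pct: float = 0.0) -> float:
    """Illustrative payout under an outcome-based contract.

    If the provider misses the target outcome, the at-risk portion of
    the fee is withheld; if the provider exceeds the target, a share of
    the upside is added on top of the base fee.
    """
    if actual < target:
        return base_fee * (1 - at_risk_pct)      # fees at risk: portion withheld
    upside = actual - target
    return base_fee + upside * gain_share_pct    # gain sharing on the excess

# Target: $100k in savings. Provider delivers $150k and shares 20% of the upside.
print(provider_payment(base_fee=50_000, at_risk_pct=0.15,
                       target=100_000, actual=150_000, gain_share_pct=0.20))
# 60000.0

# Same contract, but the provider falls short: 15% of the fee is withheld.
print(provider_payment(base_fee=50_000, at_risk_pct=0.15,
                       target=100_000, actual=80_000))
# 42500.0
```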

At IOpex Technologies, an IT services firm based in San Jose, Calif., the pandemic has led customers to ask for help with improving systems or processes. A call center automation project, in which digital workers are deployed to collaborate with the human workforce, can result in a leaner operation and significant cost savings, IOpex chief digital officer Nagarajan Chakravarthy said.

"Such projects naturally lend themselves to an outcome-based engagement model in which the customer lets IOpex take a share of the cost savings through risk/reward-based contracts," Chakravarthy said.

This approach has emerged in digital transformation engagements, especially as the pandemic accelerated automation- and cloud-based projects, he added.

Due to the push around automation-led transformation, the company launched its OpexWise toolkit. The offering brings together automation, operational platforms and cloud adoption to facilitate digital transformation, according to the company.

IOpex sees many initiatives that involve robotic process automation, chatbots, application modernization and cloud adoption to drive digital transformation, Chakravarthy noted.

Outcomes as a service

Paul Wilkinson, executive vice president at 1901 Group, a Reston, Va., MSP and wholly owned subsidiary of Leidos, equated outcome-based contracting with as-a-service purchasing among public sector customers. Agencies are turning to enterprise IT as a service, divesting themselves of capital expenditures for items such as network infrastructure, endpoint hardware and software, and end-user support, he said.

In this context, public sector customers are buying "outcomes" -- managed desktops, managed storage, managed compute, managed enterprise networks, and managed voice and unified communications, to name several, Wilkinson said.

The public sector as-a-service trend has taken off over the last three years, but COVID-19 fueled its growth. "The pandemic has certainly increased demand for [cloud services providers], with new resources being acquired for data analytics, VPN, collaboration solutions ... and remote support," he noted.

While the as-a-service model is the general direction, buying patterns can vary. For example, agencies may want to use fixed-unit-rate or firm-fixed price contracting, according to Wilkinson. A fixed-unit rate provides elasticity, because the price per unit can increase or decrease based on consumption. A firm-fixed price, meanwhile, generally lacks elasticity. This approach may be used to deliver a defined type and quantity of service where the price doesn't change -- a reserved instance of a cloud service, for example.
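The difference in elasticity comes down to simple arithmetic. The unit prices and quantities below are made up for illustration:

```python
def fixed_unit_rate(unit_price: float, units_consumed: int) -> float:
    """Elastic: total cost rises and falls with consumption."""
    return unit_price * units_consumed

def firm_fixed_price(contract_price: float) -> float:
    """Inelastic: the agreed price does not change with consumption."""
    return contract_price

# 120 managed desktops one month, 90 the next, at a hypothetical $45/desktop:
print(fixed_unit_rate(45, 120), fixed_unit_rate(45, 90))  # 5400 4050

# A reserved cloud instance at a firm-fixed $1,000/month costs the same either month:
print(firm_fixed_price(1_000))  # 1000
```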

"We see a variety of [as-a-service contracting], and it is on the rise," Wilkinson added.


Thursday, May 20, 2021

Pen Test

What is a pen test?

A penetration test is a cybersecurity technique organizations use to identify, test and highlight vulnerabilities in their security posture. These penetration tests are often carried out by ethical hackers. These in-house employees or third parties mimic the strategies and actions of an attacker in order to evaluate the hackability of an organization's computer systems, network or web applications. Organizations can also use pen testing to test their adherence to compliance regulations.

Ethical hackers are information technology (IT) experts who use hacking methods to help companies identify possible entry points into their IT infrastructure. By using different methodologies, tools and approaches, companies can perform simulated cyber attacks to test the strengths and weaknesses of their existing security systems. Penetration, in this case, refers to the degree to which a hypothetical threat actor, or hacker, can penetrate an organization's cybersecurity measures and protocols.

There are three main pen testing strategies, each offering pen testers a certain level of information they need to carry out their attack. For example, white box testing provides the tester all of the details about an organization's system or target network; black box testing provides the tester no knowledge of the system; and gray box penetration testing provides the tester partial knowledge of the system.

Pen testing is considered a proactive cybersecurity measure because it involves consistent, self-initiated improvements based on the reports generated by the test. This differs from non-proactive approaches, which lack the foresight to improve upon weaknesses as they arise. A non-proactive approach to cybersecurity, for example, would involve a company updating its firewall after a data breach occurs. The goal of proactive measures, like pen testing, is to minimize the number of retroactive upgrades and maximize an organization's security.

Pen testing is often conducted with a particular goal in mind. These goals typically fall under one of the following three objectives:

  1. identify hackable systems
  2. attempt to hack a specific system
  3. carry out a data breach

Each objective focuses on specific outcomes that IT leaders are trying to avoid. For example, if the goal of a pen test is to see how easily a hacker could breach the company database, the ethical hackers would be instructed to try to carry out a data breach. The results of a pen test will not only communicate the strength of an organization's current cybersecurity protocols, but also reveal the hacking methods that can be used to penetrate the organization's systems.

Why is pen testing important?

The rate of distributed denial-of-service, phishing and ransomware attacks is dramatically increasing, putting all internet-based companies at risk. Considering how reliant businesses are on technology, the consequences of a successful cyber attack have never been greater. A ransomware attack, for instance, could block a company from accessing the data, devices, networks and servers it relies on to conduct business. Such an attack could result in millions of dollars of lost revenue. Pen testing uses the hacker perspective to identify and mitigate cybersecurity risks before they are exploited. This helps IT leaders implement informed security upgrades that minimize the possibility of successful attacks.

Technological innovation is one of the greatest challenges facing cybersecurity, if not the greatest. As tech continues to evolve, so do the methods cybercriminals use. For companies to successfully protect themselves and their assets from these attacks, they need to be able to update their security measures at the same rate. The caveat, however, is that it is often difficult to know which methods are being used and how they might be used in an attack. But by employing skilled ethical hackers, organizations can quickly and effectively identify, update and replace the parts of their systems that are particularly susceptible to modern hacking techniques.

How to do penetration testing

Pen testing is unique from other cybersecurity evaluation methods, as it can be adapted to any industry or organization. Depending on an organization's infrastructure and operations, it may want to use a certain set of hacking techniques or tools. These techniques and their methodologies can also vary based on the IT personnel and their company standards. Using the following adaptable six-step process, pen testing creates a set of results that can help organizations proactively update their security protocols:

  1. Preparation. Depending on the needs of the organization, this step can either be a simple or elaborate procedure. If the organization has not decided which vulnerabilities it wants to evaluate, a significant amount of time and resources should be devoted to combing the system for possible entry points. In-depth processes like this are usually only necessary for businesses that have not already conducted a complete audit of their systems. Once a vulnerability assessment has been conducted, however, this step becomes much easier.
  2. Construct an attack plan. Prior to hiring ethical attackers, an IT department designs a cyber attack, or list of cyber attacks, that its team should use to perform the pen test. During this step, it is also important to define what level of system access the pen tester has.
  3. Select a team. The success of a pen test depends on the quality of the testers. This step is often used to appoint the ethical hackers that are best suited to perform the test. Decisions like these can be made based on employee specialties. If a company wants to test its cloud security, a cloud expert may be the best person to properly evaluate its cybersecurity. Companies also often hire expert consultants and certified cybersecurity experts to carry out pen testing.
  4. Determine the stolen data type. What is the team of ethical hackers stealing? The data type chosen in this step can have a profound impact on the tools, strategies and techniques used to acquire it.
  5. Perform the test. This is one of the most complicated and nuanced parts of the testing process, as there are many automated software programs and techniques testers can use, including Kali Linux, Nmap, Metasploit and Wireshark.
  6. Integrate the report results. Reporting is the most important step of the process. The results must be detailed so the organization can incorporate the findings.
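As a tiny, hedged illustration of step 5, the snippet below performs the simplest building block that scanners such as Nmap automate at scale: a TCP connect check against a single port. It only probes a listener it opens itself; scanning systems without written authorization is precisely what separates an attacker from an ethical hacker.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; an accepted connection means the port is open.

    This is the basic "connect scan" primitive that tools like Nmap
    repeat across thousands of ports and hosts.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a listener we open ourselves on the loopback interface.
server = socket.socket()
server.bind(("127.0.0.1", 0))            # the OS picks a free port
server.listen(1)
port = server.getsockname()[1]
print(tcp_port_open("127.0.0.1", port))  # True
server.close()
```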




Tuesday, May 18, 2021

AI in Education: Current Status and Potential

Artificial intelligence (AI) and machine learning are among the emerging technologies that have begun to alter education tools and institutions and to change what the future of education might look like. Artificial intelligence in the US education sector is expected to grow by 47.5% from 2017 to 2021, according to the Artificial Intelligence Market in the US Education Sector report. Even though most experts believe the critical presence of teachers is irreplaceable, there will be many changes to a teacher's job and to educational best practices.

This paper seeks to provide an overview of research on AI applications in higher education through a systematic review. The primary emphasis of this study is on the educational implications of emerging technologies for the way students learn and how institutions teach and evolve. The synthesis of results presents four areas where AI is applied: 1. Teacher and AI collaboration, 2. Differentiated and individualized learning, 3. Universal access for all students, and 4. Tutoring and support outside the classroom. In the process, this study highlights certain challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support and administration, and explores further directions for research.

Teacher and AI collaboration: where are the educators?  

Whilst AI has been around for about 30 years, it is still unclear to educators how to take pedagogical advantage of it on a broader scale, and how it can meaningfully impact teaching and learning in higher education. Nevertheless, AI has already been applied to education, primarily in tools that help develop skills and in testing systems. As AI educational solutions continue to mature, the hope is that AI can help fill gaps in learning and teaching and allow schools and teachers to do more than ever before. AI can drive efficiency and personalization and streamline admin tasks, allowing teachers the time and freedom to provide understanding and adaptability, uniquely human capabilities where machines would struggle. By leveraging the best attributes of machines and teachers, the vision for AI in education is one where they work together for the best outcome for students. Since the students of today will need to work in a future where AI is the reality, it's important that our educational institutions expose students to the technology and use it themselves.

Differentiated and individualized learning

Adjusting learning based on an individual student's particular needs has been a priority for educators for years, but AI will allow a level of differentiation that's impossible for teachers who have to manage 30 students in each class. Several groups, such as the Intelligent Computer Tutoring Group (ICTG) at the University of Canterbury in NZ and Carnegie Learning in the US, are currently developing intelligent instruction design and digital platforms that use AI to provide learning, testing and feedback to students from pre-K to college level, giving them the challenges they are ready for, identifying gaps in knowledge and redirecting them to new topics when appropriate. As AI gets more sophisticated, it might be possible for a machine to read the expression on a student's face that indicates they are struggling to grasp a subject and modify the lesson in response. The idea of customizing the curriculum for every student's needs is not viable today, but it will be for AI-powered machines.
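As a hypothetical sketch of the gap-identification-and-redirect logic described above (the topics, scores and mastery threshold are all invented for the example):

```python
def next_topic(scores: dict[str, float], mastery: float = 0.8) -> str:
    """Pick the weakest topic below the mastery threshold, or advance.

    A toy version of adaptive sequencing: keep redirecting the student
    to their weakest topic until every score clears the threshold.
    """
    gaps = {topic: score for topic, score in scores.items() if score < mastery}
    if gaps:
        return min(gaps, key=gaps.get)   # remediate the biggest gap first
    return "advance to new material"

print(next_topic({"fractions": 0.95, "decimals": 0.55, "ratios": 0.70}))
# decimals
print(next_topic({"fractions": 0.90, "decimals": 0.85, "ratios": 0.88}))
# advance to new material
```

Real platforms layer far richer student models on top of this, but the redirect decision reduces to the same comparison against a mastery target.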

Universal access for all students     

Artificial intelligence tools can help make global classrooms available to all including those who speak different languages or who might have visual or hearing impairments. Presentation Translator is a free plug-in for PowerPoint that creates subtitles in real time for what the teacher is saying. This also opens up possibilities for students who might not be able to attend school due to illness or who require learning at a different level or on a particular subject that isn’t available in their own school. AI can help break down silos between schools and between traditional grade levels.   

Automate Assessment Tasks     

An educator spends a tremendous amount of time grading homework and tests. AI can step in and make quick work of these tasks while also offering recommendations for how to close the gaps in learning. Machines can already grade multiple-choice tests, and they are close to being able to assess written responses as well. As AI steps in to automate admin tasks, it opens up more time for teachers to spend with each student.
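Grading a multiple-choice test is the easiest case to automate, which is why machines already handle it. A minimal sketch, with an invented answer key:

```python
def grade(answer_key: dict[str, str], responses: dict[str, str]) -> float:
    """Score a multiple-choice test as the fraction of correct answers.

    Unanswered questions count as incorrect.
    """
    correct = sum(responses.get(q) == a for q, a in answer_key.items())
    return correct / len(answer_key)

key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
student = {"Q1": "B", "Q2": "D", "Q3": "C", "Q4": "C"}
print(grade(key, student))  # 0.75
```

Assessing free-form written responses is the genuinely hard problem; it requires natural-language understanding rather than a lookup like this.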

Tutoring and support outside the classroom       

Ask any parent who has struggled to help their teenager with algebra, and they will be very excited about the potential of AI to support their children when they are struggling at home with homework or test preparations. Tutoring and studying programs are becoming more advanced thanks to artificial intelligence, and soon they will be more available and able to respond to a range of learning styles.    

There are many more AI applications for education that are being developed including AI mentors for learners, further development of smart content and a new method of personal development for educators through virtual global conferences. Education might be a bit slower to the adoption of artificial intelligence and machine learning, but the changes are beginning and will continue.