
Tech News Blog

Connect with TECH NEWS to discover emerging trends, the latest IT news and events, and enjoy concrete examples of why Technology First is the best connected IT community in the region.

  • 01/26/2026 3:49 PM | Marla Halley (Administrator)

    As organizations rapidly move applications and data to cloud platforms, cloud identity providers have replaced the network perimeter as the primary security boundary. Compromising a single account can provide broad access, making identity one of the highest-value targets for attackers.

    Multi-factor authentication (MFA) was once the most effective defense against account takeover. Today, it remains necessary—but it is no longer sufficient without additional steps.

    What MFA Was Built to Prevent

    Traditional phishing attacks focused on stealing credentials. Users were tricked into entering a username and password into a fake website, which attackers then reused to log in to the real service.

    MFA disrupted this model. Even with stolen credentials, attackers could not complete authentication without access to the second factor. For years, this significantly reduced phishing-related compromises.

    That protection assumed attackers were outside the authentication flow. Modern attacks no longer operate under that assumption.

    How Adversary-in-the-Middle Attacks Bypass MFA

    Adversary-in-the-Middle (AiTM) phishing shifts the attack from credential theft to session theft.

    Instead of sending users to a fake login page, attackers proxy the real sign-in experience. The victim authenticates to the legitimate service and completes MFA normally. Behind the scenes, the attacker relays all traffic and captures the resulting session token.

    Session tokens prove that authentication has already occurred. Once issued, they allow access without requiring the password or MFA again. If an attacker steals the token, MFA is effectively bypassed.

    A Typical AiTM Attack Flow

    1. The user receives a phishing email designed to create urgency.
    2. Clicking the link routes the user through attacker-controlled infrastructure.
    3. The attacker proxies the real login service.
    4. The user enters credentials and completes MFA.
    5. The identity provider issues a session token.
    6. The attacker captures and replays the token to access the account.

    From the identity provider’s perspective, the attacker’s session is valid. Authentication already succeeded.

    Why Traditional MFA Falls Short

    Most MFA methods—SMS codes, authenticator apps, and push approvals—can be relayed in real time. AiTM attacks exploit this by forwarding challenges and responses between the victim and the real service.

    Because the session token is issued after MFA is completed, MFA alone does not prevent token theft or reuse. Defending against AiTM requires controls that either prevent token capture or limit token usability.

    Controls That Actually Reduce Risk

    Phish-Resistant Authentication

    FIDO2 security keys, passkeys, and certificate-based authentication are resistant to relay attacks. These methods cryptographically bind authentication to the legitimate service and cannot be replayed through a proxy.
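
    To see why these methods resist relay, consider the origin check in WebAuthn. The browser, not the user, records which site requested the assertion, and the authenticator signs over that value, so a credential exercised through a look-alike proxy fails verification at the real service. The sketch below is illustrative only: the relying-party origin is hypothetical, and the cryptographic signature verification a real implementation performs over the authenticator data is omitted.

        import json

        EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying party

        def verify_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
            # These fields are covered by the authenticator's signature; a phishing
            # proxy hosted on a different domain cannot produce a matching "origin".
            client_data = json.loads(client_data_json)
            if client_data.get("type") != "webauthn.get":
                return False
            if client_data.get("origin") != EXPECTED_ORIGIN:
                return False  # request was relayed through attacker infrastructure
            # The challenge must match the one this server issued for this session.
            return client_data.get("challenge") == expected_challenge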

    Device-Based Access Controls
    Requiring trusted devices adds a second enforcement layer. During sign-in, the identity provider performs an additional check to confirm the request comes from a registered, compliant device rather than from an attacker’s proxy server.

    Session Token Protection
    Short session lifetimes, token binding, and continuous access evaluation reduce the value of stolen tokens and limit attacker dwell time.
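
    As a rough illustration of short lifetimes and token binding, consider a session token that carries both an expiry and a confirmation claim tied to the client. The sketch below uses the PyJWT library as an assumed choice; the fingerprint source (for example, a device certificate thumbprint) is hypothetical and would come from a device trust layer.

        import time
        import jwt  # PyJWT, an assumed choice of library

        SIGNING_KEY = "replace-with-a-real-key"  # hypothetical
        SESSION_LIFETIME_SECONDS = 15 * 60       # short lifetime limits a stolen token's value

        def issue_session_token(user_id: str, client_fingerprint: str) -> str:
            claims = {
                "sub": user_id,
                "cnf": client_fingerprint,  # bind the token to a client attribute
                "exp": int(time.time()) + SESSION_LIFETIME_SECONDS,
            }
            return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

        def validate_session_token(token: str, presented_fingerprint: str) -> bool:
            try:
                claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
            except jwt.InvalidTokenError:
                return False  # expired or tampered with
            # A token replayed from the attacker's machine fails this comparison.
            return claims.get("cnf") == presented_fingerprint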

    Continuous Detection
    Identity Threat Detection and Response (ITDR) tools identify anomalous behavior such as unfamiliar devices or impossible travel, enabling rapid containment when prevention fails.
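
    One of the simplest ITDR signals to reason about is impossible travel. A minimal sketch, assuming sign-in logs have already been geo-resolved to coordinates:

        import math
        from datetime import datetime

        MAX_PLAUSIBLE_SPEED_KMH = 900  # roughly airliner speed; tune per environment

        def haversine_km(lat1, lon1, lat2, lon2):
            # Great-circle distance between two points, in kilometers.
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def is_impossible_travel(prev_login, new_login) -> bool:
            # Each login is a (timestamp, lat, lon) tuple from sign-in telemetry.
            (t1, lat1, lon1), (t2, lat2, lon2) = prev_login, new_login
            dist = haversine_km(lat1, lon1, lat2, lon2)
            hours = abs((t2 - t1).total_seconds()) / 3600
            if dist < 1:
                return False  # effectively the same place
            if hours == 0:
                return True   # simultaneous logins from distant locations
            return dist / hours > MAX_PLAUSIBLE_SPEED_KMH

        # Example: Dayton at 9:00, then Frankfurt forty minutes later -> flagged.
        a = (datetime(2026, 1, 26, 9, 0), 39.76, -84.19)
        b = (datetime(2026, 1, 26, 9, 40), 50.11, 8.68)
        print(is_impossible_travel(a, b))  # True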

    Conclusion

    MFA is no longer a complete defense against modern identity attacks. Adversary-in-the-Middle demonstrates that attackers can bypass authentication by stealing sessions instead of credentials.

    Effective identity security requires layered controls that reflect how attacks occur: phish-resistant authentication, device trust, hardened sessions, and continuous monitoring.

    Identity is now the perimeter. Defending it requires more than a second factor.

    About the Author

    Chaim Black is a Cyber Security Manager at Intrust IT. He is focused on delivering resilient security operations. He leads day-to-day security team execution while strengthening internal security posture and compliance. Chaim also serves as President of InfraGard Cincinnati, part of the FBI-private sector partnership advancing information sharing and cyber risk awareness.

  • 01/26/2026 3:16 PM | Marla Halley (Administrator)

    The shift in offensive operations over the last 18 months is unlike anything the industry has seen before. AI isn't coming for defenders, it's already here. And to make things worse, attackers are using it to outpace traditional security controls at a rate that should concern everyone.

    Here's the reality: signature-based detection was always playing catch-up. It works by recognizing things that have already been seen: file hashes, known-bad strings, IOCs pulled from last month's incident. That model assumes attackers are reusing tools and infrastructure. They're not. Not anymore.

    Polymorphism at Scale

    Polymorphic malware isn't new. What's new is how trivially easy AI makes it to generate variants. A red team operator can take a loader, feed it through an LLM-assisted obfuscation pipeline, and produce hundreds of unique builds that share zero static indicators. Different hashes, different string tables, different control flow. Same capability.

    From an offensive perspective, this changes engagement dynamics completely. Payload development and evasion used to consume significant amounts of time. Now, generating AV-bypassing variants is almost a commodity task. If authorized red teams can do it with limited resources, assume actual threat actors, with more time, more money, and no rules of engagement, are doing it better.

    The tooling exists to test payloads against defender solutions in automated loops. Spin up a sandbox, drop the payload, check detection, mutate, repeat. Iterate until clean. That's not theoretical, it's how modern offensive tooling development works.

    Why Behavioral Detection Has to Be the Focus

    If static indicators are unreliable, what's left? Behavior.

    Malware can change its code, but it still must do something. It needs to establish persistence, move laterally, touch credentials, call home. Those actions leave traces that are harder to obfuscate than a file hash.

    Competent defenders should be watching for:

    • Process lineage that doesn't make sense (Word spawning PowerShell spawning cmd.exe)
    • Authentication patterns that deviate from baseline (service accounts logging in interactively, lateral movement spikes)
    • Memory behaviors associated with injection techniques
    • Network traffic that violates expected protocol norms

    Good detection engineering focuses on these patterns, not on "did we see this exact hash before." The best blue teams aren't hunting for tools, they're hunting for tradecraft.
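
    As a concrete, hedged example of hunting tradecraft rather than tools, a detection over process-creation telemetry can flag lineage that rarely occurs legitimately. The field names below are assumptions about a normalized event schema, not any vendor's API:

        # Parent/child process pairs that rarely occur legitimately.
        SUSPICIOUS_CHAINS = {
            ("winword.exe", "powershell.exe"),
            ("winword.exe", "cmd.exe"),
            ("excel.exe", "powershell.exe"),
            ("outlook.exe", "wscript.exe"),
        }

        def suspicious_lineage(event: dict) -> bool:
            # event: one normalized process-creation record, e.g.
            # {"parent": "WINWORD.EXE", "child": "powershell.exe", "cmdline": "..."}
            return (event["parent"].lower(), event["child"].lower()) in SUSPICIOUS_CHAINS

        events = [
            {"parent": "explorer.exe", "child": "winword.exe", "cmdline": "report.docx"},
            {"parent": "WINWORD.EXE", "child": "powershell.exe", "cmdline": "-enc JAB..."},
        ]
        for e in events:
            if suspicious_lineage(e):
                print("ALERT: suspicious process lineage:", e["parent"], "->", e["child"])

    Note that nothing here depends on a hash: any document-handling process spawning a shell gets scrutiny, no matter how the payload was mutated.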

    IOCs Need to Get Smarter

    Most IOC feeds are noise. A hash gets burned within hours. A C2 domain is useful until the next rotation. If a detection strategy depends on someone else seeing the attack first and publishing indicators, it's always behind.

    The IOCs worth investing in are behavioral: specific API call sequences, registry key patterns associated with persistence mechanisms, authentication anomalies, protocol misuse. These tie to what the attacker is trying to accomplish, not what tool they happen to be using today. That's the important distinction.

    Anyone building custom offensive tooling knows that changing source code is easy. Changing objectives is not. Credential access is still required. Lateral movement is still required. Exfiltration is still required. Detect those actions, and the operator gets caught regardless of what the payload looks like.
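
    To make the behavioral-IOC idea concrete, the check below keys on well-known autorun registry locations rather than on the hash of whatever binary wrote to them, so payload mutation does not evade it. The event schema is assumed for illustration:

        # Well-known autorun locations used for persistence.
        AUTORUN_KEY_PREFIXES = (
            r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run",
            r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run",
            r"HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce",
        )

        def persistence_attempt(reg_event: dict) -> bool:
            # reg_event: a normalized registry-write record (hypothetical fields), e.g.
            # {"key": r"HKCU\...\Run\Updater", "process": "unknown_loader.exe"}
            return reg_event["key"].startswith(AUTORUN_KEY_PREFIXES)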

    AI Works Both Ways

    Defenders have access to the same technology. Machine learning models that baseline normal environment behavior and flag deviations are genuinely useful when tuned properly and fed good telemetry. The challenge is operationalizing them without drowning in false positives.
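
    A minimal sketch of that baselining idea, using scikit-learn's IsolationForest on made-up per-account features (the feature set and numbers are illustrative, not a recommendation):

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Hypothetical per-account daily features:
        # [logins, distinct_hosts_touched, MB_uploaded, after_hours_ratio]
        baseline = np.array([
            [12, 2, 40, 0.05],
            [10, 1, 35, 0.00],
            [14, 3, 55, 0.10],
            [11, 2, 42, 0.02],
        ] * 50)  # stand-in for weeks of normal history

        model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

        today = np.array([
            [13, 2, 50, 0.05],     # ordinary day
            [45, 19, 900, 0.80],   # lateral-movement burst plus large upload
        ])
        print(model.predict(today))  # 1 = fits baseline, -1 = anomaly

    The hard part isn't the model; it's the telemetry quality and the tuning that keep false positives manageable.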

    The environments that cause the most problems during offensive engagements are the ones with mature detection engineering programs. They're correlating endpoint telemetry with identity logs and network traffic in near real-time. They're running adversary simulations that mirror actual attacker behavior, not checkbox compliance exercises. They're hunting proactively instead of waiting for alerts.

    The Uncomfortable Truth

    Prevention won't stop every breach. That's not defeatism, it's operational reality. Attackers only need to be right once. Defenders need to be right constantly.

    The goal isn't perfection. The goal is making attacker operations expensive, noisy, and slow enough that detection happens before objectives are achieved. That means investing in detection engineering, building response capabilities that actually work under pressure, and accepting that security stacks will fail at some point.

    AI is making attacks cheaper and faster to produce. The response isn't more signatures; it's better detection of the behaviors that signatures can't catch.

    Author:

    Anthony Cihan is the Senior Principal Cybersecurity Engineer at Obviam, where he leads offensive security operations and security assessments. He holds a BS in Cybersecurity and Information Assurance, the OSCP and OSWP, and has published multiple offensive security tools such as the PiSquirrel wiretap/implant and the Spellbinder SLAAC-based IPv6 attack tool.


  • 12/23/2025 10:24 AM | Marla Halley (Administrator)

    The technology landscape is shifting at an unprecedented pace, driven primarily by the rapid maturity of Artificial Intelligence. For tech leaders—CIOs, CTOs, and CISOs—2026 isn't just another year; it's a pivotal moment to move from experimentation to enterprise-grade execution. Success will be defined not by the technology you adopt, but by how strategically and responsibly you embed it at the core of your business.

    Here are my top five priorities that will define the winners in 2026 and beyond.

    1. Establish Comprehensive AI Governance and Ethics

    AI is no longer a fringe tool; it's becoming the operational fabric of the enterprise. This widespread adoption, especially of Generative AI and autonomous agents, elevates the need for robust governance.

    Leaders must prioritize building a comprehensive AI governance framework that moves from policy to operation. This framework is essential for managing risk, ensuring compliance, and building customer trust. Key actions include:

    • Define Responsible Use: Implement clear, regularly updated policies for how employees can and cannot use AI tools, with a focus on data privacy and intellectual property.
    • Ensure Data Provenance: As AI models rely on vast datasets, establishing digital provenance (proving that your data and AI outputs are genuine, traceable, and compliant) is critical.
    • Build in Transparency: Design AI agents that can document and explain their decisions, allowing for essential "human-in-the-loop" review and accountability, especially in high-risk applications like hiring or customer service.

    2. Modernize Infrastructure for an AI-Native Future

    The existing IT infrastructure, often burdened by years of technical debt, cannot support the demands of AI at scale. AI models require massive compute power, high-speed data pipelines, and a flexible, low-latency environment.

    A core priority for 2026 must be the modernization of systems and the transition to an AI-native platform. This means:

    • Cloud Foundation: Doubling down on a full-stack, cloud-first approach that provides the necessary scalability, agility, and specialized AI supercomputing platforms.
    • Data Readiness: Creating a robust "data factory" with strong data governance to ensure the quality, security, and interoperability of the data that feeds your AI models.
    • Edge Computing: Leveraging edge computing capabilities, often via IoT, to process AI-driven data closer to where it's generated (e.g., manufacturing floors, smart cities) for real-time decision-making.

    3. Elevate Cybersecurity to Preemptive Resilience

    With AI-powered attacks becoming faster and more sophisticated, standard perimeter defense is insufficient. Cybersecurity is no longer an IT operational task; it's a board-level risk concern.

    Tech leaders must shift their focus to preemptive cybersecurity and a culture of resilience:

    • Zero-Trust Security: Fully implementing a zero-trust model across the organization, which assumes no user or device is trusted by default, minimizing the risk of internal breaches.
    • AI-Driven Defense: Utilizing AI security platforms for proactive threat detection, anomaly scoring, and automated incident response to combat AI-enhanced reconnaissance and supply-chain attacks.
    • Upskill Every Employee: Cybersecurity remains a human problem. Prioritize company-wide, continuous training that focuses on phishing, identity management, and the risks associated with deepfakes and synthetic content.

    4. Redesign the Workforce for Human-AI Collaboration

    The conversation around AI is shifting from job displacement to workforce transformation. Successful leaders will recognize that the competitive advantage lies in creating human-AI hybrid teams.

    The priority here is to cultivate the human skills that AI cannot replace and redefine roles for the new era:

    • Reskilling and Upskilling: Make continuous learning a strategic imperative, training employees in data fluency, AI implementation, and prompt engineering. The most valuable professionals will blend technical AI fluency with critical human skills like creativity, emotional intelligence, and long-term strategic thinking.
    • New Roles and Career Paths: Establish new career pathways for roles that manage, monitor, and design AI systems, such as AI Ethics Officers and Agent Orchestrators.
    • Focus on Human Judgment: Use AI to eliminate mundane tasks, freeing human workers to focus on high-value activities that require complex judgment, empathy, and strategic decision-making.

    5. Drive the Shift to Composable and Adaptable Architectures

    In a world defined by rapid change and intense competition, the traditional, monolithic application structure is a liability. Large, interconnected systems are slow to update, difficult to integrate with emerging AI capabilities, and prevent the business from responding quickly to market demands.

    Tech leaders must make it a strategic priority to shift toward a composable enterprise built on modular, adaptable systems. This approach emphasizes flexibility, speed, and reuse:

    • Adopt Modular Architectures: Prioritize the full transition to microservices, containerization, and API-first design. This allows developers to quickly assemble and disassemble business capabilities (e.g., payment processing, customer login) as market conditions or new AI tools require. A minimal sketch follows this list.
    • Invest in Integration Fabric: Deploy a modern, robust integration layer (like an event mesh or sophisticated API gateway) that allows data and services to flow seamlessly between core legacy systems, cloud-native applications, and third-party vendor platforms. This is the glue that enables true agility.
    • Empower Fusion Teams: Move away from siloed IT and business units. Establish cross-functional "fusion teams" that blend business experts with low-code/no-code developers. These teams can rapidly assemble existing components to create tailored applications without waiting for lengthy, centralized IT development cycles.
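
    Here is a minimal sketch of the API-first idea, using FastAPI as an assumed framework choice: one narrowly scoped capability behind a stable contract, so the implementation (or an AI agent placed in front of it) can change without touching callers.

        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI(title="payments")  # one composable capability, one service

        class PaymentRequest(BaseModel):
            order_id: str
            amount_cents: int
            currency: str = "USD"

        @app.post("/v1/payments")
        def create_payment(req: PaymentRequest) -> dict:
            # Hypothetical stub; a real service would call a payment processor here.
            return {"order_id": req.order_id, "status": "authorized"}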

    About the Author

    Parag Pujari is the Chief Information Officer (CIO) of Jurgensen Companies, where he oversees all technology, IT strategy, and IT operations, and drives digital transformation initiatives to enhance business performance and efficiency. Parag has a distinguished background in IT leadership, specializing in areas such as cloud computing, ERP, cybersecurity, and enterprise architecture. He plays a crucial role in aligning Jurgensen Companies’ technological capabilities with its long-term strategic business goals.

  • 12/23/2025 10:04 AM | Marla Halley (Administrator)

    I hire technology leaders for a living, so I do a lot of interviews. There are some attitudes that consistently raise red flags. These are behaviors to be cautious about in professional collaborations of any sort. Watch out for them when vetting vendors, negotiating partnerships, or considering a new job. And most importantly, avoid these behaviors yourself.

    Speaking poorly of former colleagues, partners, or customers

    This is perhaps the most common and damaging one I encounter. When a candidate casually disparages a previous boss as "incompetent" or a former team as "lazy," it immediately sets off alarms. First, anyone who gossips will gossip about you. If they're willing to breach confidentiality or loyalty with past relationships, what's to stop them from doing the same when they move on from your organization?

    Trust is foundational in tech leadership—sharing sensitive strategies, handling team dynamics, or collaborating on high-stakes projects all require discretion. A level of confidentiality is assumed. We need to feel safe to be imperfect. In innovative environments, mistakes happen as part of experimentation. Badmouthing absent parties erodes psychological safety; it signals that errors will be weaponized rather than learned from.

    Nobody can see the whole picture. Deference to the unknown is a sign of maturity. Perhaps the former colleague had unseen constraints—resource limitations, personal challenges, or higher-level directives. Mature professionals show humility by withholding judgment, opting instead for curiosity: "I wondered if there might have been factors I wasn't aware of."

    Blaming others for failures

    Flag number two is blaming. Externalizing failures when addressing a setback lacks nuance and appreciation of complexity. Tech ecosystems are intricate; delays often stem from interdependent factors like ambiguous requirements, shifting priorities, or uncontrollable dependencies. Leaders who oversimplify by pointing fingers miss the systemic view needed for effective problem-solving.

    Accountability is non-negotiable in leadership. Owning outcomes, even when not directly at fault, demonstrates integrity. Blamers often avoid reflection: "What could I have done differently to mitigate this?" Blame reveals a missed growth opportunity. Those who blame others stagnate, while reflective leaders evolve.

    Casting a lost promotion or reduced scope as a betrayal

    This last one is a little more obscure, but I still hear it regularly. It emerges when discussing reasons for leaving a role. Candidates will frame being pushed to a smaller team, budget cuts, or shifted responsibilities as personal victimization—"They promised me X and then pulled the rug out."

    This kind of attitude reveals entitlement over adaptability. In dynamic tech landscapes, scopes evolve due to market shifts, funding rounds, or pivots. Resilient leaders view these as realities to navigate, not betrayals to resent. Even if it does hurt to be trusted with less responsibility, taking it as an attack reflects poor emotional regulation. Reacting with bitterness suggests difficulty handling ambiguity or disappointment gracefully—qualities essential for leading through uncertainty.

    A victim mindset fosters resentment, reducing willingness to invest in the team's success when conditions aren't ideal.

    These red flags aren't about perfection—no one has a flawless history. They're about patterns of immaturity: low self-control, ego-driven responses, and combativeness over curiosity. In contrast, professionals who earn trust speak with goodwill, own their part, appreciate complexity, and adapt without grievance.

    As you build your network—whether hiring, partnering, or job-seeking—pay attention to these signals. They imply how someone handles conflict, uncertainty, and relationships. And that awareness cuts both ways: self-reflect to avoid exhibiting them yourself. Practice pausing before critiquing absent parties; frame past experiences with ownership and nuance; view changes as opportunities rather than injustices.

    In leadership, especially technology, trust compounds success. Spot these flags early, in yourself and others, and steer toward collaborations that build it rather than erode it.

    About the Author:

    Aaron Davis is a seasoned leader and talent acquisition expert. With a career spanning over two decades, Aaron has built and led successful teams across various industries, including tech staffing, software development, healthcare, and real estate investment. He founded Reliant Search Group in 2019 and still enjoys connecting business leaders with critical talent. Aaron hosts the "Being Built" podcast, where he shares insights on business growth and leadership.

  • 11/28/2025 8:32 AM | Marla Halley (Administrator)

    Cybersecurity in 2026 is at the center of digital transformation. AI-driven threats, expanding attack surfaces, and global regulatory shifts are rewriting the rules of risk management. Leaders who understand these dynamics will shape organizations that thrive in a world where security is inseparable from innovation. These five trends highlight the changes shaping cybersecurity and why acting today sets the stage for long-term growth.

    1. AI: The Double-Edged Sword

    Artificial intelligence has become a pivotal force in both offensive and defensive cybersecurity operations. Threat actors are increasingly leveraging generative AI to craft highly convincing phishing campaigns and other social engineering attacks at scale. According to SentinelOne’s 2025 report, phishing attacks surged by 1,265% year-over-year, largely driven by the adoption of GenAI in attack workflows. In response, defensive AI systems are employing behavioral analytics and predictive modeling to detect anomalies and mitigate threats in real time, aiming to counter the growing sophistication and volume of AI-enabled attacks.

    The implications extend far beyond phishing. Gartner predicts that by 2027, AI agents will reduce the time it takes to exploit account exposures by 50%, dramatically increasing the speed and scale of credential theft and account takeover attacks. This trend highlights a critical shift toward automation in cybercrime, forcing organizations to rethink response strategies and invest in adaptive security models that can keep pace with evolving threats. Organizations that fail to anticipate this shift risk facing attacks that surpass traditional defenses, leaving critical systems exposed in a matter of minutes.

    2. The Rise of Zero Trust Architecture

    Zero Trust Architecture (ZTA) has transitioned from conceptual to operational, now embedded across critical sectors like finance, healthcare, and government. It mandates verification of every access request, independent of origin or device. Microsegmentation and continuous authentication are considered foundational practices. Gartner predicts that by 2026, 10% of large enterprises will have a mature and measurable Zero Trust program in place. This trend highlights the growing focus on building resilient security frameworks to counter evolving cyber threats.
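
    A hedged sketch of the "verify every request" posture, built around a hypothetical normalized access event; real deployments evaluate far richer signals through the identity provider's policy engine:

        def authorize(request: dict) -> bool:
            # request: hypothetical normalized access event, e.g.
            # {"device_compliant": True, "mfa_age_min": 20,
            #  "resource_sensitivity": "high", "geo_risk": "low"}
            checks = [
                request["device_compliant"],        # managed, patched device
                request["mfa_age_min"] <= 60,       # recent strong authentication
                not (request["resource_sensitivity"] == "high"
                     and request["geo_risk"] != "low"),  # deny or step up
            ]
            return all(checks)  # no network location is trusted by default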

    3. Rising Risks in Operational Technology

    The rapid expansion of connected Operational Technology (OT) devices is introducing new vulnerabilities across enterprise and industrial environments. These systems, which control critical processes, are increasingly interconnected, making them attractive targets for cyberattacks. To reduce risk and maintain operational continuity, security teams are prioritizing measures such as firmware integrity checks and network segmentation.

    Large-scale environments like smart cities and industrial systems face heightened exposure because of the sheer number and diversity of connected devices. According to IBM’s Cost of a Data Breach Report, the impact is significant: in 2025, 15% of organizations experienced OT-related breaches, and nearly a quarter of those incidents caused direct damage to OT systems or equipment, with an average cost of $4.56 million per breach.

    This expanding attack surface demands a shift toward asset-centric security models and real-time monitoring to prevent lateral movement and supply chain compromise.

    4. Endpoint Detection and Response: The Frontline of Cyber Defense

    In many cases, endpoints serve as the most accessible target for attackers. In a world of hybrid work and distributed networks, attackers often target laptops, mobile devices, and other endpoints as their primary entry point. Traditional antivirus tools, designed to detect known signatures, cannot keep up with advanced threats such as fileless malware, credential theft, and AI-driven exploits.

    EDR takes a proactive approach by continuously collecting and analyzing data from every endpoint on the network, including processes, performance metrics, network connections, and user behaviors. By storing this data in a centralized cloud-based system, EDR enables security teams to identify anomalies quickly and respond before attackers can move deeper into the network. When a threat is detected, EDR can immediately isolate the compromised device, preventing further spread and minimizing impact. IBM research shows that 90 percent of cyberattacks and 70 percent of breaches originate at endpoint devices, making robust monitoring and response capabilities a top priority. Organizations that rely solely on traditional antivirus remain vulnerable to modern attack techniques. To maintain resilience and respond quickly to threats, EDR should be a core component of every security strategy.

    5. Preparing for the Quantum Era

    Post-Quantum Cryptography (PQC) introduces cryptographic algorithms designed to withstand the computational power of quantum computers, which threaten to break traditional encryption methods like RSA and ECC. Instead of relying on current mathematical problems vulnerable to quantum attacks, PQC uses lattice-based, hash-based, and multivariate polynomial schemes that remain secure even in a quantum-driven world.

    The urgency for PQC adoption is growing as organizations recognize the long-term risk of “harvest now, decrypt later” attacks. Sensitive data encrypted today could be compromised in the future when quantum computing becomes mainstream. Gartner predicts that by 2029, advances in quantum computing will render applications, data, and networks protected by asymmetric cryptography unsafe, and by 2034, these methods will be fully breakable. Similarly, a Forbes Technology Council report highlights that quantum computing is now considered a top emerging cybersecurity threat, prompting U.S. policymakers to push for immediate preparation across both government and industry.

    PQC allows organizations to strengthen their encryption for the future while maintaining efficiency and compatibility with existing systems. By integrating quantum-safe algorithms now, businesses can maintain compliance, secure cloud environments, and protect IoT ecosystems against next-generation threats. This shift transforms cryptography from a static safeguard into a resilient, adaptive defense for the quantum era.
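
    For a sense of what adoption looks like in code, here is a minimal key-encapsulation sketch using the open-source liboqs-python bindings (an assumed choice; algorithm names vary by build, with older releases exposing "Kyber512" and newer ones "ML-KEM-512"):

        import oqs  # liboqs-python, an assumed choice of PQC library

        ALG = "Kyber512"  # adjust to the name your liboqs build exposes

        with oqs.KeyEncapsulation(ALG) as client, oqs.KeyEncapsulation(ALG) as server:
            public_key = client.generate_keypair()
            # Server encapsulates a fresh shared secret to the client's public key.
            ciphertext, server_secret = server.encap_secret(public_key)
            # Client recovers the same secret; no RSA or ECC in the exchange.
            client_secret = client.decap_secret(ciphertext)
            assert client_secret == server_secret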

    Conclusion

    Cybersecurity in 2026 is about staying ahead of threats before they emerge. AI-powered defenses, Zero Trust principles, and quantum-resistant cryptography are becoming standard practices for organizations that want to remain resilient. The companies that treat security as a core business strategy will be best positioned to protect assets, uphold compliance, and foster sustainable growth.

    Strengthen Your Cybersecurity Strategy

    At The Greentree Group, we help organizations protect critical data with comprehensive cybersecurity solutions. We work with federal, state, local, and commercial clients to identify threats, prevent vulnerabilities, and strengthen system security. Contact us today to take a proactive step toward securing your business.

    About The Author:

    Mackenzie Cole is an Analyst at The Greentree Group and a proud Wright State University alum, specializing in marketing strategy and analytics. With a passion for turning insights into impactful campaigns, Mackenzie has worked on a variety of multi-channel marketing initiatives with a focus on technology, creative storytelling, and connecting with local communities through purpose-driven marketing.

  • 10/16/2025 2:08 PM | Marla Halley (Administrator)


    As cybersecurity threats grow more sophisticated, the U.S. Department of Defense (DoD) has taken decisive action to protect sensitive data across its supply chain. The Cybersecurity Maturity Model Certification (CMMC) is now embedded in DoD contracting requirements. For organizations in the Defense Industrial Base (DIB), this is not just a regulatory shift—it’s a strategic imperative.

    Why CMMC Matters

    CMMC is a tiered certification framework designed to safeguard Controlled Unclassified Information (CUI) and Federal Contract Information (FCI). Whether you're a prime contractor or a subcontractor, if you handle either type of data, you must comply.

    The program includes three assessment levels:

    Level 1: Annual self-assessment for FCI.
    Level 2: Self or third-party assessment for CUI.
    Level 3: Government-led assessment for highly sensitive CUI.

    Why Compliance Is Urgent

    The final CMMC rule (32 CFR Part 170) took effect December 16, 2024, and the acquisition rule (48 CFR Part 204) becomes enforceable November 10, 2025. Non-compliance can result in:

    • Disqualification from DoD contracts.
    • Legal risks under the False Claims Act.
    • Reputational damage.

    Contractors must affirm continuous compliance in the Supplier Performance Risk System (SPRS), and all requirements flow down to subcontractors.

    Building Your Compliance Roadmap

    Achieving CMMC compliance is a journey, not a point-in-time exercise. Breaking the workload down into actionable steps is critical to maintaining focus. Here’s a phased approach:

    1. Understand the Framework:

    • Familiarize yourself with CMMC’s structure, domains, and practices. Map requirements to NIST SP 800-171 controls, and clarify whether your organization handles FCI, CUI, or both.
    • Another critical element is to review cloud providers and other connected systems and begin to identify shared responsibilities through a Shared Responsibility / Customer Responsibility Matrix.

    2. Readiness Assessment:

    • Determine your required CMMC level. This can be done through a review of your current contracts or through a conversation with your contract officer.
    • Review your current policies, procedures, and technical configurations. Documentation is key in achieving and maintaining CMMC compliance.
    • Conduct a gap analysis to identify areas needing improvement. Engaging with professionals who can provide guidance and expertise is crucial to help identify true gaps and to align business processes.

    3. Planning & Resourcing:

    • Develop a Plan of Action & Milestones (POA&M) to address gaps at the objective level, including prioritizing and budgeting for remediation. A minimal sketch of objective-level tracking follows this list.
    • Assign clear roles, define workflows, and identify necessary technology. Having a project manager or subject matter expert assigned to your compliance journey is essential.
    • Engage with certified experts and ensure internal ownership of compliance. The implementation of controls and objectives can be confusing; an expert who can offer advice and solutions will ensure that your interpretation of how you meet the controls does not cause issues when it comes to an official assessment.
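
    As a hedged illustration of tracking the POA&M at the objective level, the structure below uses made-up field names; CMMC prescribes the objectives themselves, not this format:

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class PoamItem:
            control_id: str    # e.g., NIST SP 800-171 control 3.1.1
            objective_id: str  # e.g., 3.1.1[a]
            gap: str
            remediation: str
            owner: str
            budget_usd: int
            due: date
            status: str = "open"

        poam = [
            PoamItem("3.1.1", "3.1.1[a]", "authorized users not formally defined",
                     "publish and approve an access roster", "IT Manager", 0,
                     date(2026, 3, 1)),
        ]
        open_items = [item for item in poam if item.status == "open"]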

    4. Implementation:

    • Update policies and procedures. Documentation is key in achieving compliance. Having clearly documented policies and procedures that address specific controls is necessary. Engaging with policy experts to ensure solid documentation is highly recommended.
    • “Document what you do, do what you document”
    • Enforce access controls. A key component of CMMC compliance is ensuring that only authorized users have access to the system and, furthermore, have access to CUI.
    • Deploy technical safeguards like encryption, a SIEM, MFA and endpoint protection.
    • Establish incident response and change control processes. Make sure that these processes are followed and that there is an audit trail so that the assessor can be provided with evidence.

    5. Continuous Monitoring:

    • Treat compliance as an ongoing effort. This includes documenting reviews, auditing processes, defining audit logs and audit review processes, and constantly ensuring that documentation is in line with implementation.
    • Use tools like SIEM and other alerting mechanisms to assist with audits of controls and objectives.  
    • Keep your POA&M updated as risks to your environment and compliance posture evolve.
    • Avoid superficial compliance and conduct mock assessments to uncover gaps.

    Preparing for the Assessment

    • Don’t just check boxes—tell a defensible story. Your System Security Plan (SSP), POA&M, and supporting documentation should clearly demonstrate how controls and objectives are implemented and enforced.
    • Use real-world examples to show how controls are implemented. Be prepared to guide the assessor through your implementation and compliance.
    • Conduct mock assessments to uncover gaps before the official evaluation. It is always good to check with designated experts to be sure you are in alignment. Contracting with a C3PAO (Certified 3rd Party Assessment Organization) to conduct a mock assessment before your official assessment will allow you to correct any known deficiencies before they are officially recorded.
    • Embed compliance into daily operations through automation and regular staff training. CMMC compliance is a culture shift for the entire organization.

    Real-World Lessons

    A case study from ProStratus highlights the value of a structured approach:

    • Conducting a thorough gap analysis and building a tailored POA&M.
    • Embedding compliance into daily operations and culture.
    • Ensuring that documented policies and procedures are clear, outline actual implementations, and are used throughout the organization.
    • Going into the assessment able to prove all 110 controls and 320 objectives; you should not go into the assessment with a POA&M.

    Common Pitfalls

    • Over-reliance on generic templates
    • Neglecting documentation
    • Lack of internal ownership
    • Treating compliance as a one-time project
    • Trying to complete the journey alone

    Success Factors

    • Leadership buy-in. A C-Level champion is absolutely necessary for success.
    • Clear documentation that identifies addressed controls and objectives.
    • Proactive security culture that addresses ALL employees and avoids siloing security and compliance to a “team.”
    • Treating compliance as a strategic advantage. The amount of time and energy that is necessary for achieving CMMC Level 2 is enormous, but this is also an opportunity to set your organization apart from competitors and assure primes and officiating bodies that you are serious about protecting sensitive data.

    Bottom Line:
    CMMC compliance is not just a regulatory hurdle—it’s an opportunity to strengthen your organization’s security posture and stand out in the defense contracting space. Start early, build a culture of compliance, and leverage expert guidance to ensure success.


    About the Author

    ProStratus is a CMMC Level 2 certified managed security service provider, delivering secure IT solutions across the Defense Industrial Base. Thomas Saul is the Director of Security and Compliance for ProStratus and is a Certified CMMC Assessor (CCA) who specializes in helping organizations operationalize compliance and building cybersecurity into daily operations.

  • 10/16/2025 2:03 PM | Marla Halley (Administrator)

    Navigating the Shift to Smarter, Self-Running Experiences

    Customer experience, or CX, is the sum of every interaction a customer has with your brand, from the first app notification to the final thank-you email. It's not confined to call centers; it's the seamless thread weaving through every touchpoint in a business, shaping loyalty in an era where expectations soar. Imagine a world where these experiences don't just react to needs. They anticipate them, resolve hiccups before they arise, and evolve effortlessly without endless human intervention. That is the autonomous CX landscape on the horizon, where AI doesn't replace people but amplifies them, touching every corner of operations in retail, finance, healthcare, education, and beyond. As industries race toward this future, four foundational pillars—strategic vision, quality assurance, training rigor, and mechanical integration—stand out as the blueprint for success. This is not just theory. It is an unfolding story of transformation, from today's reactive support to tomorrow's predictive powerhouses across all customer-facing channels. Let's dive in, exploring how these pillars build resilient, engaging journeys that keep your audience hooked and your operations ahead.

    Pillar 1: Strategic Vision, Charting the Course Beyond the Budget

    Every great shift starts with a map. In the rush to AI, too many leaders fixate on tools and costs, missing the bigger picture: Where is your CX headed in an autonomous era? Strategic vision demands asking bold questions. How will AI evolve your client interactions across apps, in-store visits, and virtual consultations? What seamless experiences will set you apart by 2030?

    Picture customer hubs evolving from fragmented silos into dynamic ecosystems, where normalized, profiled information fuels multi-agent systems. Front-line AI handles routine queries in chatbots or kiosks, escalating to specialized "supervisors" that tap deeper insights, handing off to humans only when nuance calls for it. This is not about slashing expenses. It is about directional transformation, prioritizing long-term client loyalty over short-term wins in every business domain. Without this north star, deployments falter into chaos. Engage your teams by co-creating these roadmaps. Start with workshops that paint vivid "day-in-the-life" scenarios, turning abstract strategy into tangible excitement for non-technical staff and data-driven insights for technical experts.

    Pillar 2: Quality Assurance, The Glue Holding It All Together

    In an autonomous world, consistency is not optional. It is the heartbeat of trust. Quality assurance ensures every AI interaction feels polished, reliable, and human-touched, even when it is not, whether in a drive-thru order or a personalized email campaign. Think real-time coaching: Scripts, prompts, and oversight that mirror elite outsourcing teams, grading interactions on sentiment, flow, and resolution.

    Envision transcribing 100 percent of interactions to forge knowledge repositories, not just for compliance, but to train behaviors that delight across channels. In high-stakes sectors like finance or education, this pillar prevents drift. Tools for consistent reporting flag anomalies early, like a customer's frustration spiking mid-conversation on a mobile app. The payoff? Frictionless experiences that boost retention business-wide. To keep readers riveted, frame quality as a narrative hero. Share anonymized "before-and-after" stories in your internal comms, showing how one overlooked metric turned a complaint cascade into rave reviews, resonating with CXOs eyeing ROI and frontline teams craving simplicity.

    Pillar 3: Training Rigor, Building AI That Learns Like Humans

    Autonomy thrives on adaptability, and that is where training rigor shines. Gone are static models. Enter AI that ingests personas—style guides and prompts tailored for customer-facing finesse—while undergoing relentless coaching cycles. It is like raising a digital apprentice: Start with zero knowledge base, feed it transcribed dialogues, regional dialects (hello, Southern US inflections), and iterative feedback to refine accuracy in emails, chats, or voice assistants.

    This pillar powers the story's turning point: From clunky bots to intuitive agents that personalize on the fly, like suggesting "your usual sausage biscuit" based on geolocation and past orders during an in-app upsell. For resource-strapped teams, like nonprofits dodging DIY pitfalls, lean on accessible platforms for workflow training and agent licenses, bypassing unguided tools that promise quick fixes but deliver frustration. Make training engaging by gamifying it. Leaderboards for "best anomaly hunts" (spotting order errors via license plates) turn compliance into collaboration, preparing workforces for a job market craving self-motivated learners over rote specialists—appealing to technical builders and visionary leaders alike.

    Pillar 4: Mechanical Integration, The Engines of Seamless Automation

    No autonomous tale is complete without the machinery that makes it hum. Mechanical integration weaves robotics and edge tech into the fabric of CX, handling the grunt work so humans focus on magic, from warehouse fulfillment to personalized retail recommendations. Dual cameras spotting menu items with yes/no precision? Edge-localized machine learning slashing voice latency to milliseconds? Headset analytics canceling noise while monitoring volume trends? These are not gadgets. They are the plot devices propelling us forward.

    From burger-flipping arms streamlining prep to shelf-scanning bots enforcing planograms with image-code smarts, this pillar scales repetition into reliability across supply chains and service desks. Autonomous prototypes in quick-service spots run end-to-end robotic ops, while manufacturing cameras enforce glove checks for safety. Early costs are low, but watch for upticks as efficiencies compound, like cloud trends on steroids. Hook your audience with demos. Virtual tours of edge-powered point-of-sale systems surviving outages prove how mechanical muscle delivers outage-proof speed and sparks innovation across retail, healthcare telehealth, and beyond, bridging the gap for non-technical users with visuals and CXOs with scalability metrics.

    Weaving the Pillars into Your Autonomous Story

    These four pillars are not silos. They interlock to narrate a compelling arc: From data chaos to predictive bliss, reactive fixes to proactive delight in every customer touchpoint. High-level steps to get started? First, audit your data for strategic alignment. Second, pilot quality-focused transcriptions in one channel. Third, roll out persona training with regional tweaks. Fourth, integrate mechanical pilots for latency-sensitive tasks. Fifth, cycle through refinements, benchmarking against 5-year adoption curves.

    The risks? Deepfakes from mere minutes of media, fraud via unchecked access, or cost swings from unchecked scaling. Counter with multi-factor authentication, anomaly detection, and vigilant oversight. The reward? Unified channels yielding hyper-personalized, resilient CX that captivates customers and empowers teams, positioning every forward-thinking business for enduring success.

    As we edge toward this autonomous horizon, the question is not if, but how boldly you will lead the change. Dive deeper into these pillars to craft your organization's next chapter: one where CX isn't a department, but the defining edge of your entire enterprise.


    About The Author

    Bill Magnuson is a seasoned leader in technology transformation, with a strong background in driving innovation, strategic growth, and operational excellence. He combines business acumen with tech expertise to help organizations modernize, scale sustainably, and deliver greater value to customers.

  • 09/23/2025 2:16 PM | Marla Halley (Administrator)

    • Organizations tend to think that if they deploy EDR (Endpoint Detection and Response) solutions on their workstations, they are “safe” from malware. While EDR is a powerful tool in detecting and responding to threats, it’s only one piece of a much larger cybersecurity puzzle.

      True Cybersecurity isn’t just about technology—it’s about governance, process, and accountability. Compliance frameworks like NIST, HIPAA, PCI and GDPR aren’t just bureaucratic checkboxes; they provide structured approaches to managing risk, protecting data, and ensuring resilience. Even your basic Cyber Insurance policy requires your thoughtful responses to Self-Assessment Applications and proof of compliance. Risk management, meanwhile, helps organizations identify vulnerabilities beyond the technical layer—such as third-party risks, insider threats, and operational weaknesses.

      Without a strong compliance and risk management foundation, even the best technical defenses can fall short. Cybersecurity must be holistic, integrating people, processes, and technology. Organizations that treat compliance and risk management as core components of their security strategy are better positioned to prevent breaches, respond effectively, and maintain trust. 

      Why are we so concerned about Cybersecurity?

      We all hear the headlines about data breaches and the pain they cause in terms of lost privacy, lost revenue while systems are recovered, and expensive recovery costs.  Look at these recent statistics, and just think of the recent major breach in our own backyard with Kettering Health Network:

      • “It takes organizations an average of 204 days to IDENTIFY a data breach and 73 days to CONTAIN it” (Bonnie). In the case of Kettering Health Network, the breach may have gone undetected for up to six weeks (Bruce), and systems were back to full operation in three weeks (Alder).
      • “74% of all breaches include the human element” (Bonnie).
      • “12% of employees took sensitive IP with them when they left an organization, including customer data, employee data, health records, and sales contracts” (Bonnie).
      • The reality in today’s environment is that email-based “Business Email Compromise” (BEC), or “Phishing,” now causes 36% of Cybersecurity breaches (Spys). These compromises aim to get a user to divulge the username and password for a critical resource like their email. In many environments that depend on a cloud-based infrastructure like Microsoft 365 (or Google Workspace, among others), gaining access to a user’s email also exposes whatever OneDrive and SharePoint data that user can reach. Premises-based systems with on-site servers are not immune to compromise either. Attackers target these systems with downloaded documents or programs designed to deceive users into opening or executing them.

      Note above that “74% of data breaches involve the human element.” Thus, we need to protect the resources that users have access to and train them how to detect and respond to these compromise attempts.

      So what’s the right path?

      As an MSP, we recommend a layered approach to security and compliance for overall risk management. Even the way cloud resources such as Microsoft 365 are implemented is important to the overall security of an organization.

      Before moving into advanced Compliance and Risk-Management solutions, it’s important to first review the workstation and server basics that serve as the foundation for enhanced security, compliance, and risk management.

      Workstation (Endpoint) Basics:

      Microsoft 365 Business Premium or equivalent accounts for advanced security and compliance features such as Microsoft Defender, Purview, Azure Active Directory and Intune.

      Patch Management – MSP management provides additional oversight of the patch process, allowing review and additional approval for those occasional times when Microsoft releases patches with unexpected side-effects.

      Endpoint Detection and Response (EDR) – continuously monitors endpoints for evidence of threats and performs automatic actions to help mitigate them. Do note that EDR only monitors the endpoints themselves.

      Backup for Microsoft 365 Email, OneDrive and SharePoint. By default, Microsoft provides no “backup” of your Microsoft 365 data (email, SharePoint and OneDrive), only a guaranteed level of service. Thus, a backup solution is needed to protect your data.

      Server Basics

      For clients still using servers, those resources need to be protected as well – to at least the same degree of protection as the workstations. Servers need to be deployed with similar Patch Management, EDR and backup solutions. Servers should have complete immutable and secure backups to enable granular file restores as well as “bare-metal” restores for disaster recovery.

      Better Security

      Protecting the “network”

      Building on the basic protections at the workstation and server level, additional protections need to be deployed to further protect your resources. While EDR-based solutions will detect and respond to the great majority of “downloaded” compromises, EDR won’t detect cases where an attacker gains access to your cloud-based data or to other important external services.

      MDR/XDR solutions add to the “endpoint” EDR. MDR, “Managed Detection and Response,” adds real-time analysis of cloud-based environments as well as integration with EDR and other devices such as firewalls and network equipment. MDR digests data from all these platforms in real time, analyzes it, and provides automated and human response as necessary. Thus, MDR solutions provide a much more proactive, real-time solution with a much broader view of the entire network.

      Web Filtering

      Web Filtering solutions provide the ability to “categorize” web activities and allow or deny access to categories of websites based on an organization's needs. Most solutions also have the built-in capability to automatically deny access to known “command and control” or known infected systems that are a primary source of actual malware. The web filtering solutions thus provide an additional level of protection by preventing access to a malicious website that a user may inadvertently access through an email link or document that references an external site to download malware.

      Protecting the Human Resources

      Since the Human Element is still a primary weak point in Cybersecurity defense, we suggest training and testing users to give them the knowledge and tools they need to combat breach attempts. Regular Cybersecurity awareness training generally leads to a 70% reduction in security-related risks (Keepnet). A regular regimen of monthly targeted short training videos, slide decks or other web-based materials on pertinent topics such as how to spot phishing attempts, social engineering, safe surfing and password management helps keep people more aware and less apt to fall for a phishing or other breach attempt. Furthermore, regular simulated phishing messages, configured to bypass filtering, can test users to see how they actually perform against phishing attempts.

      So where does Compliance and Risk Management come into play?

      All the above topics relate primarily to prevention. That is all well and good until the prevention measures fall short. At some point, no matter how many blocks are put in place against malware, something will slip by. A breach can prove catastrophic to almost any organization.

      Cyber Insurance is becoming almost mandatory for any business to protect their assets in the event of any sort of breach. The challenge is that many organizations complete the Cyber Insurance questionnaire by checking boxes—without confirming that proper procedures or evidence are actually in place. For example, a common question is: “Have you implemented strong password policies?” Simply telling employees to use strong passwords isn’t enough to qualify as a valid “yes.”

      If a breach occurs, your insurance provider will expect proof that all conditions were met. Without it, your claim will likely be denied.

      Recent studies show that more than 40% of Cyber Insurance claims go unpaid—most often because of incomplete, inaccurate, or misleading information provided on the application (Asaff).

      The Cyber Insurance questionnaires are treated as factual statements. If discrepancies are discovered during a claim review, they can become grounds for denial of coverage.

      Going further than Cyber Insurance, many organizations are subject to federal, state, and industry regulations that put further compliance requirements on organizations. For instance, any organization dealing with medical data is subject to stringent HIPAA regulations. Any financial-related organization is subject to FTC Safeguard regulations. Any organization that handles credit cards is subject to PCI requirements. Many of these regulations carry very stiff penalties for non-compliance and in the event of a breach, can be disastrous to the organizations if they aren’t diligent in their policies, procedures, controls and evidence.

      So how do you ensure compliance?

      To fully protect your organization, any Cyber Insurance policy requirements as well as further federal, state and industry regulations must be strictly met. The various protections mentioned earlier for endpoints, servers and network are only a starting point. Compliance is more than just completing a checklist saying you are doing everything needed. Organizations must have clear policies in place, acknowledged by all relevant employees, along with procedures and controls that put those policies into action. Equally important is maintaining ongoing evidence to demonstrate that these measures are effective.

      Compliance isn’t a one-time task—it’s an ongoing process that requires continuous testing, monitoring, and review to ensure lasting protection and effectiveness.

      Regular network scans (quarterly is best, or at minimum annually) that automatically analyze the environment for Patch Management, stored personal information (PII), weak passwords or poor password management, and out-of-date software can provide excellent data on a regular basis. Automated analysis of a cloud-based environment provides valuable information for further review or action.

      Additionally, maintaining a regular cadence of policy creation, review, and employee acknowledgment ensures that the entire organization has clear documentation and procedures in place. Recommended or required policies may include:

      • Acceptable Use Policy
      • Access Control Policy
      • Remote Access (work from home) Policy
      • Backup and Recovery Policy
      • Vendor Risk Management Policy
      • Security Awareness Policy

    Two of the most important documents then become an Incident Response Policy and Procedure (IRPP), which defines how your organization will respond to a variety of incidents, and a Written Information Security Plan (WISP), which provides the full suite of documentation that can be used to prove compliance with any regulations that apply to the organization.

    These policies need to be backed up with procedures and acceptance/acknowledgement by all pertinent staff members.

    A platform that combines appropriate regulation selection, the required policies and controls, automated third-party scanning (internal and external vulnerability analysis covering endpoints, the cloud environment and internet interfaces), accepted policy templates, automatic policy acceptance, automated and manual evidence collection, and WISP creation makes compliance and risk management easier, faster, and far less stressful for your organization.

    Conclusion

    There are not many companies or organizations that can truly say they don’t need Cyber Insurance at a minimum. Many organizations are subject to further regulatory requirements (HIPAA, PCI DSS, CMMC, FTC Safeguards and others) that require not only the very basic Cybersecurity protections but also compliance with very specific controls to ensure the IT environment is always as secure as possible. Compliance can be very difficult, but the risk of non-compliance is far greater: it can put many companies out of business.

    About the Author

    Barry Hassler is the founder and President of Hassler Communication Systems Technology, Inc. (HCST), a business IT Managed Services Provider based in Beavercreek, OH. HCST serves the greater Dayton and Springfield, Ohio area (and beyond), specializing in managed IT services, Cybersecurity and risk management, Microsoft 365 cloud services, backup solutions and disaster recovery, and Voice-over-IP (VoIP) telecommunications. Barry is a certified compliance consultant.

    References and Supplementary Materials 

    Hoffman, Zack. “Cyber Insurance Challenges: Why Premiums Are Rising, and Coverage Is Harder to Obtain | CyberMaxx.” CyberMaxx, 23 Oct. 2024, www.cybermaxx.com/resources/cyber-insurance-challenges-why-premiums-are-rising-and-coverage-is-harder-to-obtain.

    Scroxton, Alex. “Data Breach Class Action Costs Mount Up.” ComputerWeekly.com, 24 Apr. 2025, www.computerweekly.com/news/366622911/Data-breach-class-action-costs-mount-up.

    Palatty, Nivedita James. “64 Cyber Insurance Claims Statistics 2025.” Astra, 27 June 2025, www.getastra.com/blog/security-audit/cyber-insurance-claims-statistics/.

    Palatty, Nivedita James. “81 Phishing Attack Statistics 2025: The Ultimate Insight.” Astra, 19 Aug. 2025, www.getastra.com/blog/security-audit/phishing-attack-statistics/.

    Bonnie, Emily. “110+ of the Latest Data Breach Statistics [Updated 2025].” Secureframe, 3 Jan. 2025, secureframe.com/blog/data-breach-statistics.

    Spys, Denys. “Phishing Statistics in 2025: The Ultimate Insight | TechMagic.” Blog | TechMagic, 4 Aug. 2025, www.techmagic.co/blog/blog-phishing-attack-statistics.

    Alder, Steve. “Kettering Health Resumes Normal Operations for Key Services Following Ransomware Attack.” HIPAA Journal, 13 June 2025, www.hipaajournal.com/kettering-health-ransomware-attack.

    Bruce, Giles. “Kettering Health Says Data Breached in Ransomware Attack.” Becker’s Hospital Review | Healthcare News & Analysis, 28 July 2025, www.beckershospitalreview.com/healthcare-information-technology/cybersecurity/kettering-health-says-data-breached-in-ransomware-attack.

    Keepnet Labs. “2025 Security Awareness Training Statistics.” Keepnet Labs, 23 July 2025, keepnetlabs.com/blog/security-awareness-training-statistics.

    Khalil, Mohammed. “Cyber Insurance Claims Statistics: Inside the Stats on Denials, Costs, and Coverage Gaps.” DeepStrike, 29 June 2025, deepstrike.io/blog/cyber-insurance-claims-statistics.

    Asaff, Kate. “Think You’re Covered? 40% of Cyber Insurance Claims Say Otherwise.” Portnox, 23 May 2025, www.portnox.com/blog/compliance-regulations/think-youre-covered-40-of-cyber-insurance-claims-say-otherwise.

  • 08/20/2025 10:33 AM | Marla Halley (Administrator)

    Software leaders face immense pressure. You’re expected to deliver high-quality products under tight deadlines, all while managing costs and keeping your team from burning out. Bugs, missed deadlines, scope creep, and unrealistic demands are often seen as part of the job.

    If this sounds familiar, you’re not alone. In a recent Lighthouse Technologies survey of 110 software leaders, 27% reported experiencing burnout—a direct result of constant rework, late nights, and endless firefighting.


    Many leaders accept this as the status quo, but it doesn’t have to be your reality. There is a better way! You can transform your team’s productivity and restore their work-life balance, allowing them to focus on what truly matters most—both at work and at home. Sound too good to be true? Here are three steps to get started.

    1. Stop Managing Symptoms. Start Uncovering Root Causes.

    Quality issues, missed schedules, and productivity challenges aren’t solved by throwing more people or hours at them; they’re solved by uncovering and addressing the root causes.

    Consider a 250-person development team we worked with. They were five years into a two-year project—stuck in beta, drowning in open defects, and unable to release. Frustration was high for everyone, from customers to developers to executives.

    Our initial Root Cause Analysis uncovered a shocking 475 findings. One of the most critical? A high volume of overly complex code. Cyclomatic complexity, a measure of the number of unique paths through a piece of code, is a leading indicator of risk. Fragile code with high complexity is difficult to test, hard to maintain, and a breeding ground for bugs. This complexity is a core reason that a developer who goes in to fix a bug or make an enhancement is likely to break something that previously worked.

    • A code module with 10+ branches is considered fragile.
    • Modules with 51+ branches are considered untestable.

    This client had 1,655 complex modules, representing 9.5% of their entire system. This wasn’t just a technical problem; it was a business problem.

    ACTIONABLE INSIGHT: Complex code = Fragile code.

    Use tools like SonarQube to regularly monitor cyclomatic complexity. A good goal is for less than 1.5% of your software modules to have complexity greater than 10.
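
    SonarQube surfaces these metrics in its dashboard; for a quick standalone check on a Python codebase, a short script using the open-source radon library can produce a similar report. This is a minimal sketch, assuming radon is installed (pip install radon) and a hypothetical "src" source root:

    # complexity_report.py: sketch of a cyclomatic-complexity check with radon.
    from pathlib import Path
    from radon.complexity import cc_visit

    THRESHOLD = 10  # blocks above this are considered fragile

    def report(root: str) -> None:
        total = 0
        complex_blocks = 0
        for path in Path(root).rglob("*.py"):
            source = path.read_text(errors="ignore")
            try:
                blocks = cc_visit(source)  # functions/methods with scores
            except SyntaxError:
                continue  # skip files that do not parse
            for block in blocks:
                total += 1
                if block.complexity > THRESHOLD:
                    complex_blocks += 1
                    print(f"{path}:{block.lineno} {block.name} "
                          f"complexity={block.complexity}")
        if total:
            pct = 100 * complex_blocks / total
            print(f"{complex_blocks}/{total} blocks over {THRESHOLD} "
                  f"({pct:.1f}%); the goal above is under 1.5%")

    if __name__ == "__main__":
        report("src")  # hypothetical source root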

    2. Close the Defect Loop & Restore Confidence

    The same team was discovering 22.1 new defects per day—but fixing only 20.3 per day. To make matters worse, their bad-fix rate was 25%, meaning every fourth “fix” broke something else.

    The result? An ever-growing backlog of bugs and sinking delivery confidence. This isn’t just about an overloaded team; it’s about a broken system that erodes customer trust, developer morale, and leadership’s confidence in their team.

    ACTIONABLE INSIGHT: Track your defect backlog and bad-fix rate over time. A high bad-fix percentage signals broken processes that need urgent attention—not just more testing.
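
    As a sketch of what that tracking can look like, the snippet below projects backlog growth from daily discovery and fix rates, treating each bad fix as a new defect re-entering the backlog. Apart from the rates quoted above, all numbers are made up for illustration:

    # defect_trend.py: illustrative backlog projection (example numbers only).
    def project_backlog(start_backlog: float, found_per_day: float,
                        fixed_per_day: float, bad_fix_rate: float,
                        days: int) -> float:
        """Model each bad fix as re-adding one defect to the backlog."""
        backlog = start_backlog
        for _ in range(days):
            effective_fixes = fixed_per_day * (1 - bad_fix_rate)
            backlog += found_per_day - effective_fixes
        return backlog

    # The team's observed rates: 22.1 found/day, 20.3 fixed/day, 25% bad fixes.
    final = project_backlog(start_backlog=500, found_per_day=22.1,
                            fixed_per_day=20.3, bad_fix_rate=0.25, days=90)
    print(f"Projected backlog after 90 days: {final:.0f} defects")

    Even though fix throughput looks close to the discovery rate, the 25% bad-fix rate makes the backlog grow by roughly seven defects a day in this simple model.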

    3. Establish Clear Release Exit Criteria

    Why does release readiness matter? We all want to know how well the software will work once it is released and how many issues our customers are likely to discover. Most companies simply plan 30 days of testing for major releases, regardless of the number of defects being discovered. If your team found 10 defects per day for the last 5 days, it is very likely they will find 10 more on day 31 (if they are allowed to continue). To improve release readiness, we need to track and report defect data so management can make informed release decisions.
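
    One lightweight way to put this into practice is sketched below: project the remaining defect discoveries from the recent daily rate and check whether that rate is trending toward zero. The counts and threshold here are illustrative, not the client’s actual model:

    # release_readiness.py: sketch of a defect-discovery projection.
    from statistics import mean

    recent_daily_finds = [11, 10, 9, 10, 10]  # last 5 days of new defects

    def projected_additional(daily_counts, days_remaining):
        # Naive projection: assume the recent average rate holds steady.
        return mean(daily_counts) * days_remaining

    extra = projected_additional(recent_daily_finds, days_remaining=7)
    print(f"Projected additional defects before release: {extra:.0f}")

    # One possible exit criterion: release only when the daily discovery
    # rate stays below an agreed threshold for several consecutive days.
    THRESHOLD = 2
    ready = all(n < THRESHOLD for n in recent_daily_finds[-3:])
    print("Release exit criterion met:", ready)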

    As an example, the below graph shows the team's predicted defects—worst case (blue), best case (green), and actual (black). By their scheduled release date (Feb 19 – the vertical, black dotted line), the team had discovered far fewer defects than expected. In fact, they had been discovering 5 defects/day for the past two weeks and the rate was steady. Additionally, they were approximately 100 defects short of the plan. Fewer bugs might sound good, but it’s often a red flag for insufficient testing.


    Without this data, the client would have released a bug-ridden product, leading to customer frustration and more firefighting. Instead, they used the data to justify pushing the release and empowering their team to get creative with testing (see the blue oval).

    The result? They released a system their customers loved, and the team not only got to celebrate their first win in what felt like forever, but also reclaimed their nights and weekends.

    ACTIONABLE INSIGHT: Whether doing manual or automated testing, a tester’s job is not to execute test cases; it's to find unique defects. Encourage your team to think creatively and critically. This will empower your team, improve company culture, and lead to better software!

    You Can’t Manage What You Don’t Measure

    This transformation didn’t happen by chance. It happened because the team stopped guessing and started measuring. By shining a light on the root causes—not just the symptoms—they were able to:

    • Resolve production issues
    • Improve customer satisfaction
    • Restore delivery confidence
    • Finally breathe again

    You don’t have to choose between delivering great software and protecting your team’s work-life balance. With the right data and processes, you can achieve both. That’s why at Lighthouse Technologies we live by the principle: you can’t manage what you don’t measure. If you’d like to improve your quality, schedule, productivity and work-life balance, let’s have a conversation and explore this together. 

    Special Opportunity for Technology First Members

    Project managers know the triple constraints of quality, schedule, and cost are inextricably tied together. As we have helped software teams improve over the past twenty years, we realized that culture also plays a crucial role: the team must have psychological safety to raise issues and bring ideas forward. Our Software Performance Benchmark is designed to baseline your team’s current quality, schedule, cost, effort, and culture Key Performance Indicators (KPIs). From there, we compare these KPIs against industry data to help you identify opportunities for improvement and chart a data-driven path forward to success. Remember: you can’t manage what you don’t measure.

    The Software Performance Benchmark is normally $10,000, but for Technology First members we are offering it at a 50% discount. Not only that: if we don’t find at least a 20% improvement, we offer a full money-back guarantee. If you’re ready to stop managing symptoms and start solving problems, reach out to us at team@lighthousetechnologies.com!

    About the author:  After nearly two decades as a software developer and test engineer for the U.S. Air Force, where he built automated testing platforms and helped his team achieve CMM-3 certification, Jeff Van Fleet discovered his passion for transforming how software teams work. He founded Lighthouse Technologies to help organizations boost productivity, rescue struggling projects, and manage complex implementations through streamlined processes and agile practices. Outside of work, he enjoys hiking, baking bread, telling Dad jokes, and cheering for Penn State and the Pittsburgh Steelers—all while prioritizing balance as a husband and father.

  • 08/14/2025 10:12 AM | Marla Halley (Administrator)

    I am experienced in delivering value to companies via projects and programs. This profession has led me to be deeply involved in the Project Management Institute (PMI). As the Co-Chairman for developing the PMI Business Analysis Practice Guide 2.0, I led a team charged with defining and refining how skills and competencies shape professional excellence. I worked with an international team and led many interesting discussions about roles and how the skills needed to perform them were ever changing. We ended up with a document that complements other PMI standards by providing detailed techniques that can be used in conjunction with broader project management frameworks. This has led to several thoughtful discussions with like-minded professionals on how we develop the workforce to meet ever-changing environments and deliver value.

    But in applying those Business Analysis practices in real-world technology and business environments, I realized there was a missing piece. Skills and competencies — while critical — don’t fully explain why some professionals excel and others plateau. The difference often lies in personal attributes: the enduring qualities like adaptability, resilience, and integrity that influence how a person learns, applies, and sustains their capabilities.

    The strength of an organization’s workforce is not built on skills alone. It is the synergy between personal attributes, competencies, and technical and soft skills, all working within the framework of a strong corporate culture, that drives lasting success. If you want to develop a workforce, you must foster an environment that thrives on personal growth.

    Understanding the Three Building Blocks

    Figure 1 (Building Blocks) illustrates the relationship between individual skills, competencies, and personal attributes. At the center of this relationship is corporate culture: workforce development falters if the organization does not live the value of personal growth daily. We can break these building blocks down into three categories.

    Figure 1: Building Blocks

    1. Skills – The Practical Abilities

    Skills are the specific, teachable abilities that can be measured and improved. They can be technical (e.g., cloud architecture, data analytics) or soft (e.g., negotiation, presentation skills). While essential, skills alone don’t ensure role success — they need to be applied within the right context.

    This is the basis for a change management skills gap analysis exercise. It can include technical skills, people skills, and business acumen. Acquiring literacy in an external domain (such as AI) may also represent an opportunity for workforce development.

    2. Competencies – The Integrated Capabilities

    Competencies are broader than skills, combining knowledge, technical ability, and behaviors. For example, the competency of cybersecurity leadership includes threat analysis, incident response, communication under pressure, and ethical judgment. Competencies reflect not just what someone can do, but how they consistently perform.

    This is an area where roles don’t matter, but the functions do.  For example, a person may be assigned a role as a project manager but performs a lot of business analysis in defining the project “definition of done”.  Sending the person to training on business analysis will help their overall competencies as a change agent for the organization.

    3. Personal Attributes – The Human Foundation

    Attributes such as resilience, curiosity, empathy, and integrity influence how individuals approach challenges, adapt to change, and engage with others. These traits are often more difficult to teach, but they determine how effectively a person develops and applies both skills and competencies.

    These are individual traits; you cannot teach them or force them on a person, but you can encourage them. Critical attributes for workforce development might include a commitment to personal and professional growth, curiosity, and the ability to relate something that seems extraneous to one’s personal sphere of influence.


    The Corporate Culture Connection

    Corporate culture shapes — and is shaped by — the way these three elements interact.

    • A culture of continuous learning encourages employees to develop new skills regularly.
    • A collaborative culture fosters competencies like teamwork and cross-functional problem-solving.
    • A values-driven culture reinforces personal attributes such as trustworthiness and accountability.

    When culture and development are aligned, organizations create a self-reinforcing cycle: employees gain the capabilities they need, apply them effectively, and model behaviors that strengthen the culture for the next generation of talent.

    Why It Matters for Workforce Development

    Workforce development is important for evolving your team to meet challenges in the workplace. Investing in the team, both formally and informally, also yields intangible benefits. While pursuing a production issue, a team member who feels free to relate a similar past experience can tell a story that illustrates the problem better than volumes of technical documentation and YouTube videos. Some of these intangible benefits include:

    • Higher retention due to stronger employee engagement.
    • Better adaptability to new technologies and market shifts.
    • More effective leadership pipelines with candidates ready to step into critical roles.

    This holistic approach turns workforce development into a strategic advantage rather than a reactive necessity.

    Practical Steps for Leaders

    Workforce development needs to be a conscious, sustainable activity. It is not just putting together bullet points in January to sit on a shelf gathering dust until next January, when you dust them off, change a few words, and call it good. And it doesn’t have to involve an elaborate HR campaign. I recommend you work with each member of the team to fill out a simple 4-quadrant card.

    Review this quarterly, or maybe even monthly. This is not a career-pathing exercise; it is a personal growth exercise.

    Many corporations punt on growth by telling individuals, “You are in charge of your career,” and declining to provide a career path. That approach makes workforce development the employee’s job alone. To the naysayers who argue, “If we develop them, they might quit and go to another company”: that is a risk, but employees may leave anyway because they don’t feel valued due to a lack of development.

    The End Goal

    My journey from writing about skills and competencies in the PMI Business Analysis Practice Guide 2.0 to exploring their interplay with personal attributes reinforced a vital truth: technical excellence alone isn’t enough.

    You can be taught technical tools and various business processes, but to develop the workforce you need a corporate culture that considers skills, competencies, and personal attributes together. Treat workforce development not as a cost but as an investment. AI cannot replace the personal attributes of curiosity and the ability to tell a story from experience to clarify a situation.

    The real magic happens when an organization intentionally aligns skills, competencies, and personal attributes within a culture that values and develops all three. That is where capability meets commitment, and where organizations create lasting impact. To quote Ohio State football legend Woody Hayes, “You Win with People”. Your organization needs that perspective when it comes to workforce development.

    About the author:  David Davis is a recognized thought leader and seasoned Program/Project Manager with over 20 years’ experience leading large-scale business transformation, process improvement, and change management initiatives. He is skilled at bridging strategy and execution, fostering stakeholder trust, and driving measurable benefits through disciplined agile practices, benefits realization, and cross-functional collaboration.

