
Tech News Blog

Connect with TECH NEWS to discover emerging trends, the latest IT news and events, and enjoy concrete examples of why Technology First is the best connected IT community in the region.


  • 03/26/2026 2:05 PM | Marla Halley (Administrator)


    • I’m sure you have heard the trending news by now: the United States is experiencing a labor shortage that is affecting industries such as manufacturing, logistics, healthcare, transportation, and agriculture[1]. The story has been covered in regional newspapers, business magazines, and trade journals. These industries should sound familiar, as they are in high demand in most markets across the country. What is causing the shortage, and what can employers do about it? The causes include our nation’s aging population, low unemployment, people opting out of the workforce, working conditions, skills gaps, and the school-to-job pipeline. There is good news in many of these causative factors: employers across the country still have considerable control over them, and they can grab the bull by the horns with robust workforce planning.

      Workforce Planning Primer

      The Society for Human Resource Management advocates that all companies practice the art of workforce planning[2]. There is no single formula, because every company has different needs based on its culture, strategic plan, and goals and objectives. That said, your company can develop a workforce plan that guides it toward a robust talent pipeline and better positions it to accomplish its goals and objectives. The steps in the planning process are straightforward:

    1. Conduct a supply analysis.
    2. Conduct a demand analysis.
    3. Conduct a gap analysis.
    4. Formulate a solution (plan).

    The purpose of the supply analysis is to ensure your company knows the regional and national supply of potential employees. It should look at available headcount and skills, as well as demographics, including generational representation. This analysis will help your company project the future impact of retirements and resignations, and it can surface labor pools you may not yet be tapping (e.g., new graduates, upskilled current employees, veterans, people with disabilities, older adults, returning citizens, and new Americans). Drawing on all of these categories creates a far more resilient workforce for your company.

    The demand analysis helps you design your future workforce composition. This is where your strategic plan comes in handy, ensuring you recruit the skillsets your new and expanded business requires. Here you ask what skills and experience you need in every position. Review your job descriptions closely to ensure they are accurate and give your company maximum flexibility.

    The gap analysis makes a comparison between what you said you need in your demand analysis and what is available through your supply analysis. 
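    At its core, the gap analysis is a comparison of two lists. A toy sketch in Python (all role names and headcounts below are hypothetical illustrations, not data from the article):

    ```python
    # Toy gap analysis: compare forecast demand against projected supply.
    # Role names and numbers are hypothetical.
    demand = {"cloud engineer": 12, "plc technician": 8, "data analyst": 5}
    supply = {"cloud engineer": 7, "plc technician": 8, "data analyst": 2}

    gaps = {role: need - supply.get(role, 0)
            for role, need in demand.items()
            if need > supply.get(role, 0)}

    print(gaps)  # roles where projected supply falls short of demand
    ```

    The roles left in `gaps` are the ones the solution step must address through recruiting, upskilling, or new labor pools.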

    Last but not least is your solution. It should spell out how you plan to recruit to meet your needs, upskill current employees, and potentially tap resources your company has never used before.

    An Easy-to-Find Labor Resource

    One readily available resource in every market is students, at all levels of education. High school through graduate students across the country are looking for work-based learning opportunities that help them discern the right career choices. In Ohio, we are lucky to have about 400K high school students and close to 400K college and university students in our state. Imagine if we could keep the vast majority of these students here: our labor shortage in Ohio would be long gone!

    Cassie is President of the Strategic Ohio Council for Higher Education (SOCHE), a non-profit organization that specializes in helping employers recruit, train, employ, develop, and manage interns. After 59 years in business, SOCHE knows that today’s interns are tomorrow’s workforce! If you’re looking for talent, let SOCHE help connect you with your next generation of workers. Reach out at soche@soche.org or visit www.soche.org.

    [1] https://www.uschamber.com/workforce/understanding-americas-labor-shortage

    [2] https://www.shrm.org/topics-tools/tools/toolkits/practicing-discipline-workforce-planning


  • 03/01/2026 11:25 AM | Marla Halley (Administrator)

    • AI is no longer experimental. According to the 2026 Software Lifecycle Engineering Decision Maker Survey, 76.6% of organizations are actively using AI in development workflows, with another 20.4% evaluating its implementation. Only 3.1% remain disengaged.[1]

      Automation and AI have reshaped how we build, deploy, and protect software. From speeding up code delivery to enhancing threat detection, these systems promise speed, consistency, and scale. But automation isn’t infallible, and when it goes wrong, as it already has, the consequences ripple across entire industries. This raises an important question: who is accountable when automated systems fail, and how should we rethink risk transfer in response?

      When Automation Fails at Scale

      In July 2024, a routine security update triggered a global technology outage that left millions of Windows machines unusable overnight. A flawed configuration update for the widely deployed endpoint security agent caused systems to crash into boot loops, disrupting airlines, hospitals, broadcasters, banking systems, and emergency services.

      The root cause? A bug in the internal validation process—a tool meant to ensure updates were safe. Instead, it mistakenly allowed a defective update to reach customers’ systems. This wasn’t a cyberattack. It was a failure of automated testing and quality assurance.

      Even with a swift response, the fallout was vast. While many systems were restored within days, the financial toll on individual organizations was significant. Delta Air Lines alone claimed hundreds of millions of dollars in losses due to canceled flights and operational chaos. The scale of disruption underscores a fundamental truth: automation amplifies both benefits and failures.

      The Illusion of Infallible Automation

      Automation is often sold as a panacea. It promises faster releases, fewer human errors, and continuous delivery at scale. But experts have noted that automated validation systems are still software, and therefore still prone to defects. As one analysis observed after the outage, automation tools can miss edge cases or malformed data exactly because they operate within predefined assumptions.

      In the 2024 incident, the content validator allowed a defective configuration file to pass through because its own logic failed to spot the mismatch. This gap illustrates an important point: automation inherits the limitations and blind spots of its creators and its design. No matter how sophisticated, automated testing can only check for what it’s designed to anticipate.
      Equally consequential is the practice of deploying updates globally without staged rollouts or “canary” testing that limits blast radius. Had the faulty update been deployed to a small subset first, the outage might have been contained before it became global.
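      The staged-rollout idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual deployment pipeline; the ring sizes and failure threshold are hypothetical:

      ```python
      # Minimal staged-rollout sketch: push an update to progressively larger
      # "rings" of the fleet, halting if any ring's observed failure rate
      # exceeds a threshold. Ring fractions and threshold are hypothetical.
      def staged_rollout(deploy, health_check, rings=(0.01, 0.10, 0.50, 1.0),
                         max_failure_rate=0.001):
          for fraction in rings:
              deploy(fraction)                     # ship to this share of fleet
              if health_check(fraction) > max_failure_rate:
                  return f"halted at {fraction:.0%} ring"  # blast radius contained
          return "fully deployed"
      ```

      Had a gate like this been in place, a universally crashing update would have been stopped at the first, smallest ring instead of reaching every machine at once.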

      Accountability in a Fragmented Risk Landscape

      Traditionally, software vendors deliver products with liability clauses that limit financial exposure. Customers bear much of the operational risk when something goes wrong. This model assumes vendors won’t be at fault too often, and that organizations will manage their own risk through internal controls, testing environments, and contingency planning.

      As dependency on third-party tools increases across businesses, the lines of accountability grow blurrier, and ecosystem risk grows.

      From Insurance to Guarantees: Shifting Risk Transfer Models

      One emerging response in the cybersecurity ecosystem is the integration of financial risk transfer mechanisms alongside technical tools. Traditional cyber insurance policies have long been used to shift risk—covering costs associated with breaches, ransomware attacks, and business interruptions. These operate reactively and often exclude systemic or vendor-related failures.

      In contrast, some companies have begun offering guarantees or warranties backed by insurance that tie performance outcomes to financial protection. For example, one deep learning-based cybersecurity provider teamed with an insurer to offer a performance guarantee with ransomware warranty coverage up to millions of dollars—signaling confidence in both product effectiveness and risk mitigation.

      Similarly, cyber warranty programs embedded with solutions now exist where customers receive financial backstop in the event of qualifying incidents. This helps to cover forensic costs, legal fees, or response activities.

      These approaches represent a shift from purely technical performance to outcome-based assurances, essentially placing some level of financial accountability on the provider when specific guarantees aren’t met.

      Why This Matters Now

      The cyber insurance market itself reflects a changing risk calculus. Claims have surged, particularly around ransomware and third-party failures, pushing insurers to tighten underwriting and scrutinize vendor risk more closely. In some cases, insurers now demand continuous monitoring and proactive threat mitigation as prerequisites for coverage.

      Meanwhile, hybrid models that blend warranty and insurance help bridge gaps between technical defense tools and financial resilience. They push organizations—and the vendors they work with—to think beyond feature checklists toward shared accountability for outcomes.

      A Framework for Shared Accountability

      As digital systems continue to grow in complexity and interdependence, organizations need a framework that acknowledges both technical and financial aspects of risk.

          More granular vendor commitments increase trust in product performance.

          Integrated risk transfer ensures incidents don’t derail business continuity.

          Retaining human oversight ensures automation enhances judgment.

          Continuous feedback turns failures into systemic improvement.

    Bridging Promise and Trust

    Automation and AI have immense potential to drive efficiency and scale, but their unchecked use can mask latent risks. As the industry evolves, true accountability will come from aligning technical performance with shared financial responsibility and risk management frameworks.

    Closing the accountability gap isn’t about eliminating automation. It’s about designing systems, contracts, and risk policies that recognize the shared stakes of all parties involved—vendors, customers, insurers, and regulators alike.

    About the Author:

    Michael Benzinger, Vice President, Director of Engineering for Cardre Information Security.

    [1] https://futurumgroup.com/press-release/ai-reaches-97-of-software-development-organizations/


  • 02/26/2026 3:42 PM | Marla Halley (Administrator)

    Manufacturing environments are becoming more automated, more connected, and more complex than ever before. While this progress unlocks efficiency and productivity, it also introduces new vulnerabilities. When something goes wrong on the plant floor, the speed at which you recover can mean the difference between a minor disruption and a costly production shutdown.

    Common Problems That Disrupt Production

    1. Downtime caused by missing or outdated program backups
    A machine goes down unexpectedly. The maintenance team investigates and discovers the PLC program has been modified at some point—but no one knows when, why, or by whom. Worse, the only backup available is months old or incomplete. Production stops while engineers scramble to rebuild or recover the correct version. What should have been a quick fix turns into hours—or even days—of lost output.

    2. Untracked PLC or robot changes
    On a busy shop floor, multiple technicians, engineers, and contractors may access control systems. Without a centralized way to track changes, small adjustments can create big problems. A line that ran perfectly yesterday suddenly behaves unpredictably today. Without an audit trail, troubleshooting becomes guesswork.

    3. Audit and compliance challenges
    Whether driven by internal standards, customer requirements, or regulatory bodies, many manufacturers must demonstrate control over their automation assets. When documentation is scattered across personal laptops, USB drives, or outdated servers, preparing for an audit becomes stressful, time-consuming, and risky.

    4. Knowledge loss due to employee turnover
    Experienced technicians often carry critical knowledge about machine configurations and program changes. When they leave, retire, or change roles, that knowledge can disappear with them. The next time an issue occurs, teams may struggle to understand how systems were configured or why certain changes were made.

    These scenarios are more common than many organizations would like to admit. The good news is that they are also preventable.

    Building Resilience with Octoplant

    Resilient manufacturing operations are not just about preventing downtime—they are about recovering quickly and confidently when issues occur. This is where Octoplant plays a critical role.

    Octoplant is designed to centralize, secure, and manage automation assets across the entire plant. Instead of relying on scattered backups and manual documentation, manufacturers gain a single source of truth for their control programs and configurations.

    Here’s how Octoplant helps companies improve resilience and reduce time to recovery.

    1. Automated, reliable backups
    Octoplant performs regularly scheduled backups of PLCs, robots, HMIs, and other automation devices. This ensures that the most recent, validated version of each program is always available. When a failure occurs, maintenance teams can quickly restore the correct version—eliminating guesswork and reducing downtime.

    2. Full change tracking and audit trails
    Every change to a control program is tracked. Teams can see who made the change, when it happened, and what was modified. This visibility simplifies troubleshooting and ensures accountability across the organization.

    3. Centralized version control
    With Octoplant, all automation programs are stored in a centralized, structured repository. Instead of searching through multiple laptops or network folders, engineers can instantly access the latest approved version. This reduces the risk of loading outdated or incorrect programs during recovery.

    4. Faster troubleshooting and recovery
    When something goes wrong, time is critical. Octoplant provides immediate insight into program differences, recent changes, and system status. Maintenance teams can quickly identify the root cause and restore operations—often in minutes instead of hours.

    5. Knowledge retention across the workforce
    By documenting changes and storing programs centrally, Octoplant captures institutional knowledge. Even when employees leave or retire, the system retains the history and context needed to keep operations running smoothly.

    From Reactive to Resilient

    In modern manufacturing, disruptions are inevitable. Equipment fails, changes are made, and people come and go. The difference between a fragile operation and a resilient one lies in preparation, visibility, and control.

    Octoplant gives manufacturers the tools they need to move from reactive troubleshooting to proactive resilience. By ensuring that every automation asset is backed up, tracked, and recoverable, companies can protect production, reduce risk, and maintain confidence in their operations.

    When downtime strikes, resilience isn’t just about getting back online—it’s about how fast and how confidently you can do it. With Octoplant, recovery becomes a controlled, predictable process.

    About the author:

    Mike Rolfes is an account manager at ATR Automation. ATR Automation has been providing industrial automation and electrical engineering solutions since 1956. As we have grown alongside our community over the decades, we have become the trusted name for industrial automation software in the region.

    To learn more about Octoplant, please reach out to ATR Automation.  I can be reached at michaelrolfes@atrautomation.com or (513) 353-1800 ext. 5037.


  • 01/26/2026 3:49 PM | Marla Halley (Administrator)

    As organizations rapidly move applications and data to cloud platforms, cloud identity providers have replaced the network perimeter as the primary security boundary. Compromising a single account can provide broad access, making identity one of the highest-value targets for attackers.

    Multi-factor authentication (MFA) was once the most effective defense against account takeover. Today, it remains necessary—but it is no longer sufficient without additional steps.

    What MFA Was Built to Prevent

    Traditional phishing attacks focused on stealing credentials. Users were tricked into entering a username and password into a fake website, which attackers then reused to log in to the real service.

    MFA disrupted this model. Even with stolen credentials, attackers could not complete authentication without access to the second factor. For years, this significantly reduced phishing-related compromises.

    That protection assumed attackers were outside the authentication flow. Modern attacks no longer operate under that assumption.

    How Adversary-in-the-Middle Attacks Bypass MFA

    Adversary-in-the-Middle (AiTM) phishing shifts the attack from credential theft to session theft.

    Instead of sending users to a fake login page, attackers proxy the real sign-in experience. The victim authenticates to the legitimate service and completes MFA normally. Behind the scenes, the attacker relays all traffic and captures the resulting session token.

    Session tokens prove that authentication has already occurred. Once issued, they allow access without requiring the password or MFA again. If an attacker steals the token, MFA is effectively bypassed.

    A Typical AiTM Attack Flow

    1. The user receives a phishing email designed to create urgency.
    2. Clicking the link routes the user through attacker-controlled infrastructure.
    3. The attacker proxies the real login service.
    4. The user enters credentials and completes MFA.
    5. The identity provider issues a session token.
    6. The attacker captures and replays the token to access the account.

    From the identity provider’s perspective, the attacker’s session is valid. Authentication already succeeded.

    Why Traditional MFA Falls Short

    Most MFA methods—SMS codes, authenticator apps, and push approvals—can be relayed in real time. AiTM attacks exploit this by forwarding challenges and responses between the victim and the real service.

    Because the session token is issued after MFA is completed, MFA alone does not prevent token theft or reuse. Defending against AiTM requires controls that either prevent token capture or limit token usability.

    Controls That Actually Reduce Risk

    Phish-Resistant Authentication

    FIDO2 security keys, passkeys, and certificate-based authentication are resistant to relay attacks. These methods cryptographically bind authentication to the legitimate service and cannot be replayed through a proxy.
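    The reason relay fails is that the authenticator signs client data that embeds the origin the browser actually visited, and the relying party rejects any mismatch. A simplified sketch of that origin check (the domains below are hypothetical, and real WebAuthn verification also checks the challenge and signature):

    ```python
    import json

    # Conceptual sketch of WebAuthn's origin binding: the signed client data
    # records the origin the browser really connected to, so a response
    # relayed through a look-alike proxy domain never validates.
    def origin_check(client_data_json: bytes, expected_origin: str) -> bool:
        client_data = json.loads(client_data_json)
        return client_data.get("origin") == expected_origin

    legit = json.dumps({"type": "webauthn.get",
                        "origin": "https://login.example.com"}).encode()
    proxied = json.dumps({"type": "webauthn.get",
                          "origin": "https://login.examp1e-proxy.net"}).encode()

    print(origin_check(legit, "https://login.example.com"))    # True
    print(origin_check(proxied, "https://login.example.com"))  # False
    ```

    Because the origin is inside the signed payload, the attacker cannot rewrite it in transit without invalidating the signature.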

    Device-Based Access Controls
    Requiring trusted devices adds a second enforcement layer. During login, the identity provider performs an additional check to confirm the request is coming from a trusted device rather than an attacker’s proxy server.

    Session Token Protection
    Short session lifetimes, token binding, and continuous access evaluation reduce the value of stolen tokens and limit attacker dwell time.
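    A rough illustration of how lifetime and binding combine (the field names and fingerprint scheme here are hypothetical, not any specific identity provider's format):

    ```python
    import time

    # Illustrative session-token check: a token is honored only if it is
    # still fresh AND bound to the client fingerprint it was issued to, so
    # a token replayed from an attacker's machine is rejected even before
    # it expires. Field names are hypothetical.
    def token_valid(token: dict, client_fingerprint: str,
                    max_age_seconds: int = 900) -> bool:
        fresh = time.time() - token["issued_at"] < max_age_seconds
        bound = token["bound_to"] == client_fingerprint
        return fresh and bound

    token = {"issued_at": time.time(), "bound_to": "victim-device-tls-hash"}
    print(token_valid(token, "victim-device-tls-hash"))    # True: legitimate use
    print(token_valid(token, "attacker-device-tls-hash"))  # False: replay blocked
    ```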

    Continuous Detection
    Identity Threat Detection and Response (ITDR) tools identify anomalous behavior such as unfamiliar devices or impossible travel, enabling rapid containment when prevention fails.
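    "Impossible travel" is one of the simpler ITDR heuristics to reason about: if two logins imply a travel speed beyond anything physically possible, the session is flagged. A minimal sketch (the coordinates and the 1000 km/h threshold are illustrative choices, not a product's defaults):

    ```python
    from math import radians, sin, cos, asin, sqrt

    def km_between(lat1, lon1, lat2, lon2):
        # Great-circle (haversine) distance in kilometers.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * asin(sqrt(a))

    def impossible_travel(login_a, login_b, max_kmh=1000):
        # Flag if the implied speed between two logins exceeds the threshold.
        distance = km_between(login_a["lat"], login_a["lon"],
                              login_b["lat"], login_b["lon"])
        hours = abs(login_b["time"] - login_a["time"]) / 3600
        return hours > 0 and distance / hours > max_kmh

    cincinnati = {"lat": 39.10, "lon": -84.51, "time": 0}
    singapore = {"lat": 1.35, "lon": 103.82, "time": 3600}  # one hour later
    print(impossible_travel(cincinnati, singapore))  # True: flag for review
    ```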

    Conclusion

    MFA is no longer a complete defense against modern identity attacks. Adversary-in-the-Middle demonstrates that attackers can bypass authentication by stealing sessions instead of credentials.

    Effective identity security requires layered controls that reflect how attacks occur: phish-resistant authentication, device trust, hardened sessions, and continuous monitoring.

    Identity is now the perimeter. Defending it requires more than a second factor.

    About the Author

    Chaim Black is a Cyber Security Manager at Intrust IT. He is focused on delivering resilient security operations. He leads day-to-day security team execution while strengthening internal security posture and compliance. Chaim also serves as President of InfraGard Cincinnati, part of the FBI-private sector partnership advancing information sharing and cyber risk awareness.

  • 01/26/2026 3:16 PM | Marla Halley (Administrator)

    The shift in offensive operations over the last 18 months is unlike anything the industry has seen before. AI isn't coming for defenders, it's already here. And to make things worse, attackers are using it to outpace traditional security controls at a rate that should concern everyone.

    Here's the reality: signature-based detection was always playing catch-up. It works by recognizing things that have already been seen; file hashes, known-bad strings, IOCs pulled from last month's incident. That model assumes attackers are reusing tools and infrastructure. They're not. Not anymore.

    Polymorphism at Scale

    Polymorphic malware isn't new. What's new is how trivially easy AI makes it to generate variants. A red team operator can take a loader, feed it through an LLM-assisted obfuscation pipeline, and produce hundreds of unique builds that share zero static indicators. Different hashes, different string tables, different control flow. Same capability.

    From an offensive perspective, this changes engagement dynamics completely. Payload development and evasion used to consume significant amounts of time. Now, generating AV-bypassing variants is almost a commodity task. If authorized red teams can do it with limited resources, assume actual threat actors, with more time, more money, and no rules of engagement, are doing it better.

    The tooling exists to test payloads against defender solutions in automated loops. Spin up a sandbox, drop the payload, check detection, mutate, repeat. Iterate until clean. That's not theoretical, it's how modern offensive tooling development works.

    Why Behavioral Detection Has to Be the Focus

    If static indicators are unreliable, what's left? Behavior.

    Malware can change its code, but it still must do something. It needs to establish persistence, move laterally, touch credentials, call home. Those actions leave traces that are harder to obfuscate than a file hash.

    Competent defenders should be watching for:

    • Process lineage that doesn't make sense (Word spawning PowerShell spawning cmd.exe)
    • Authentication patterns that deviate from baseline (service accounts logging in interactively, lateral movement spikes)
    • Memory behaviors associated with injection techniques
    • Network traffic that violates expected protocol norms

    Good detection engineering focuses on these patterns, not on "did we see this exact hash before." The best blue teams aren't hunting for tools, they're hunting for tradecraft.
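    The lineage pattern from the first bullet can be expressed as a toy rule. This is a deliberately simplified sketch; the process lists and the parent-child rule are illustrative, not a production detection:

    ```python
    # Toy process-lineage check: flag chains where an Office application
    # spawns a shell interpreter. Names and rule are illustrative.
    SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
    SHELLS = {"powershell.exe", "cmd.exe", "wscript.exe"}

    def suspicious_lineage(chain):
        """chain is an ordered list of process names, oldest ancestor first."""
        return any(parent.lower() in SUSPICIOUS_PARENTS
                   and child.lower() in SHELLS
                   for parent, child in zip(chain, chain[1:]))

    print(suspicious_lineage(["explorer.exe", "winword.exe", "powershell.exe"]))  # True
    print(suspicious_lineage(["explorer.exe", "chrome.exe"]))                     # False
    ```

    The payload's hash is irrelevant here; what triggers the rule is the behavior of a document editor launching a shell.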

    IOCs Need to Get Smarter

    Most IOC feeds are noise. A hash gets burned within hours. A C2 domain is useful until the next rotation. If a detection strategy depends on someone else seeing the attack first and publishing indicators, it's always behind.

    The IOCs worth investing in are behavioral: specific API call sequences, registry key patterns associated with persistence mechanisms, authentication anomalies, protocol misuse. These tie to what the attacker is trying to accomplish, not what tool they happen to be using today. That's the important distinction.

    Anyone building custom offensive tooling knows that changing source code is easy. Changing objectives is not. Credential access is still required. Lateral movement is still required. Exfiltration is still required. Detect those actions, and the operator gets caught regardless of what the payload looks like.

    AI Works Both Ways

    Defenders have access to the same technology. Machine learning models that baseline normal environment behavior and flag deviations are genuinely useful when tuned properly and fed good telemetry. The challenge is operationalizing them without drowning in false positives.
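    The baselining idea reduces, in its simplest form, to scoring how far today's value sits from the historical norm. Real UEBA/ITDR models are far richer; the numbers below are hypothetical and the z-score is only the crudest version of the technique:

    ```python
    from statistics import mean, stdev

    # Minimal baselining sketch: score an observation (e.g. hourly logins
    # for a service account) by its deviation from the historical baseline.
    def anomaly_score(history, observed):
        mu, sigma = mean(history), stdev(history)
        return abs(observed - mu) / sigma if sigma else 0.0

    baseline = [4, 5, 6, 5, 4, 6, 5]        # normal hourly login counts
    print(anomaly_score(baseline, 5))       # near zero: within baseline
    print(anomaly_score(baseline, 40) > 3)  # True: far beyond baseline, alert
    ```

    The operational challenge the text describes is exactly here: choosing thresholds and features so that the score separates real tradecraft from benign variance.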

    The environments that cause the most problems during offensive engagements are the ones with mature detection engineering programs. They're correlating endpoint telemetry with identity logs and network traffic in near real-time. They're running adversary simulations that mirror actual attacker behavior, not checkbox compliance exercises. They're hunting proactively instead of waiting for alerts.

    The Uncomfortable Truth

    Prevention won't stop every breach. That's not defeatism, it's operational reality. Attackers only need to be right once. Defenders need to be right constantly.

    The goal isn't perfection. The goal is making attacker operations expensive, noisy, and slow enough that detection happens before objectives are achieved. That means investing in detection engineering, building response capabilities that actually work under pressure, and accepting that security stacks will fail at some point.

    AI is making attacks cheaper and faster to produce. The response isn't more signatures; it's better detection of the behaviors that signatures can't catch.

    Author:

    Anthony Cihan is the Senior Principal Cybersecurity Engineer at Obviam, where he leads offensive security operations and security assessments. He holds a BS in Cybersecurity and Information Assurance and the OSCP and OSWP certifications, and has published multiple offensive security tools, such as the PiSquirrel wiretap/implant and the Spellbinder SLAAC-based IPv6 attack tool.


  • 12/23/2025 10:24 AM | Marla Halley (Administrator)

    The technology landscape is shifting at an unprecedented pace, driven primarily by the rapid maturity of Artificial Intelligence. For tech leaders—CIOs, CTOs, and CISOs—2026 isn't just another year; it's a pivotal moment to move from experimentation to enterprise-grade execution. Success will be defined not by the technology you adopt, but by how strategically and responsibly you embed it at the core of your business.

    Here are my top five priorities that will define the winners in 2026 and beyond.

    1. Establish Comprehensive AI Governance and Ethics

    AI is no longer a fringe tool; it's becoming the operational fabric of the enterprise. This widespread adoption, especially of Generative AI and autonomous agents, elevates the need for robust governance.

    Leaders must prioritize building a comprehensive AI governance framework that moves from policy to operation. This framework is essential for managing risk, ensuring compliance, and building customer trust. Key actions include:

    • Define Responsible Use: Implement clear, regularly updated policies for how employees can and cannot use AI tools, with a focus on data privacy and intellectual property.
    • Ensure Data Provenance: As AI models rely on vast datasets, establishing digital provenance (proving that your data and AI outputs are genuine, traceable, and compliant) is critical.
    • Build-in Transparency: Design AI agents that can document and explain their decisions, allowing for essential "human-in-the-loop" review and accountability, especially in high-risk applications like hiring or customer service. 

    2. Modernize Infrastructure for an AI-Native Future

    The existing IT infrastructure, often burdened by years of technical debt, cannot support the demands of AI at scale. AI models require massive compute power, high-speed data pipelines, and a flexible, low-latency environment.

    A core priority for 2026 must be the modernization of systems and the transition to an AI-native platform. This means:

    • Cloud Foundation: Doubling down on a full-stack, cloud-first approach that provides the necessary scalability, agility, and specialized AI supercomputing platforms.
    • Data Readiness: Creating a robust "data factory" with strong data governance to ensure the quality, security, and interoperability of the data that feeds your AI models.
    • Edge Computing: Leveraging edge computing capabilities, often via IoT, to process AI-driven data closer to where it's generated (e.g., manufacturing floors, smart cities) for real-time decision-making.

    3. Elevate Cybersecurity to Preemptive Resilience

    With AI-powered attacks becoming faster and more sophisticated, standard perimeter defense is insufficient. Cybersecurity is no longer an IT operational task; it's a board-level risk concern.

    Tech leaders must shift their focus to preemptive cybersecurity and a culture of resilience:

    • Zero-Trust Security: Fully implementing a zero-trust model across the organization, which assumes no user or device is trusted by default, minimizing the risk of internal breaches.
    • AI-Driven Defense: Utilizing AI security platforms for proactive threat detection, anomaly scoring, and automated incident response to combat AI-enhanced reconnaissance and supply-chain attacks.
    • Upskill Every Employee: Cybersecurity remains a human problem. Prioritize company-wide, continuous training that focuses on phishing, identity management, and the risks associated with deepfakes and synthetic content.

    4. Redesign the Workforce for Human-AI Collaboration

    The conversation around AI is shifting from job displacement to workforce transformation. Successful leaders will recognize that the competitive advantage lies in creating human-AI hybrid teams.

    The priority here is to cultivate the human skills that AI cannot replace and redefine roles for the new era:

    • Reskilling and Upskilling: Make continuous learning a strategic imperative, training employees in data fluency, AI implementation, and prompt engineering. The most valuable professionals will blend technical AI fluency with critical human skills like creativity, emotional intelligence, and long-term strategic thinking.
    • New Roles and Career Paths: Establish new career pathways for roles that manage, monitor, and design AI systems, such as AI Ethics Officers and Agent Orchestrators.
    • Focus on Human Judgment: Use AI to eliminate mundane tasks, freeing human workers to focus on high-value activities that require complex judgment, empathy, and strategic decision-making.

    5. Drive the Shift to Composable and Adaptable Architectures

    In a world defined by rapid change and intense competition, the traditional, monolithic application structure is a liability. Large, interconnected systems are slow to update, difficult to integrate with emerging AI capabilities, and prevent the business from responding quickly to market demands.

    Tech leaders must make the shift toward a composable enterprise, built on modular, adaptable systems, a strategic priority. This approach emphasizes flexibility, speed, and reuse:

    • Adopt Modular Architectures: Prioritize the full transition to microservices, containerization, and API-first design. This allows developers to quickly assemble and disassemble business capabilities (e.g., payment processing, customer login) as market conditions or new AI tools require.
    • Invest in Integration Fabric: Deploy a modern, robust integration layer (like an event mesh or sophisticated API gateway) that allows data and services to flow seamlessly between core legacy systems, cloud-native applications, and third-party vendor platforms. This is the glue that enables true agility.
    • Empower Fusion Teams: Move away from siloed IT and business units. Establish cross-functional "fusion teams" that blend business experts with low-code/no-code developers. These teams can rapidly assemble existing components to create tailored applications without waiting for lengthy, centralized IT development cycles.
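    The modular assembly described above can be sketched in miniature. The Python snippet below is an illustrative sketch only, not any particular platform's API; the capability names (`validate`, `charge`) and the registry class are hypothetical. The point it shows is the composable pattern itself: each business capability is a self-contained unit registered behind a common interface, so capabilities can be assembled, swapped, or retired independently.

    ```python
    from typing import Callable, Dict

    # A capability takes a payload dict and returns an enriched payload dict.
    Capability = Callable[[dict], dict]

    class CapabilityRegistry:
        """Hypothetical registry that assembles pipelines from modular parts."""

        def __init__(self) -> None:
            self._caps: Dict[str, Capability] = {}

        def register(self, name: str, cap: Capability) -> None:
            # Each capability is registered under a stable name, so it can be
            # replaced later without touching the capabilities around it.
            self._caps[name] = cap

        def compose(self, *names: str) -> Capability:
            # Assemble a pipeline from independently deployable pieces.
            def pipeline(payload: dict) -> dict:
                for name in names:
                    payload = self._caps[name](payload)
                return payload
            return pipeline

    # Hypothetical capabilities for a checkout flow:
    registry = CapabilityRegistry()
    registry.register("validate", lambda p: {**p, "valid": "amount" in p})
    registry.register("charge", lambda p: {**p, "charged": p.get("valid", False)})

    checkout = registry.compose("validate", "charge")
    result = checkout({"amount": 42})
    ```

    Swapping the `"charge"` entry for a new implementation, perhaps one backed by an AI service, changes the pipeline without modifying `"validate"` or the composition logic, which is the agility the composable approach is after.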

    About the Author

    Parag Pujari is the Chief Information Officer (CIO) of Jurgensen Companies, where he oversees all technologies, IT strategy, and IT operations, and drives digital transformation initiatives to enhance business performance and efficiency. Parag has a distinguished background in IT leadership, specializing in areas such as cloud computing, ERP, cybersecurity, and enterprise architecture. Parag plays a crucial role in aligning Jurgensen Companies’ technological capabilities with its long-term strategic business goals.

  • 12/23/2025 10:04 AM | Marla Halley (Administrator)

    I hire technology leaders for a living, so I do a lot of interviews. There are some attitudes that consistently raise red flags. These are behaviors to be cautious about in professional collaborations of any sort. Watch out for them when vetting vendors, negotiating partnerships, or considering a new job. And most importantly, avoid these behaviors yourself.

    Speaking poorly of former colleagues, partners, or customers

    This is perhaps the most common and damaging one I encounter. When a candidate casually disparages a previous boss as "incompetent" or a former team as "lazy," it immediately sets off alarms. First, anyone who gossips will gossip about you. If they're willing to breach confidentiality or loyalty with past relationships, what's to stop them from doing the same when they move on from your organization?

    Trust is foundational in tech leadership—sharing sensitive strategies, handling team dynamics, or collaborating on high-stakes projects all require discretion. A level of confidentiality is assumed. We need to feel safe to be imperfect. In innovative environments, mistakes happen as part of experimentation. Badmouthing absent parties erodes psychological safety; it signals that errors will be weaponized rather than learned from.

    Nobody can see the whole picture. Deference to the unknown is a sign of maturity. Perhaps the former colleague had unseen constraints—resource limitations, personal challenges, or higher-level directives. Mature professionals show humility by withholding judgment, opting instead for curiosity: "I wondered if there might have been factors I wasn't aware of."

    Blaming others for failures

    Flag number two is blaming. Externalizing failures when addressing a setback lacks nuance and appreciation of complexity. Tech ecosystems are intricate; delays often stem from interdependent factors like ambiguous requirements, shifting priorities, or uncontrollable dependencies. Leaders who oversimplify by pointing fingers miss the systemic view needed for effective problem-solving.

    Accountability is non-negotiable in leadership. Owning outcomes, even when not directly at fault, demonstrates integrity. Blamers often avoid reflection: "What could I have done differently to mitigate this?" Blame reveals a missed growth opportunity. Those who blame others stagnate, while reflective leaders evolve.

    Casting a lost promotion or reduced scope as a betrayal

    This last one is a little more obscure, but I still hear it regularly. It emerges when discussing reasons for leaving a role. Candidates will frame being pushed to a smaller team, budget cuts, or shifted responsibilities as personal victimization—"They promised me X and then pulled the rug out."

    This kind of attitude reveals entitlement over adaptability. In dynamic tech landscapes, scopes evolve due to market shifts, funding rounds, or pivots. Resilient leaders view these as realities to navigate, not betrayals to resent. Even if it does hurt to be trusted with less responsibility, taking it as an attack reflects poor emotional regulation. Reacting with bitterness suggests difficulty handling ambiguity or disappointment gracefully—qualities essential for leading through uncertainty.

    A victim mindset fosters resentment, reducing willingness to invest in the team's success when conditions aren't ideal.

    These red flags aren't about perfection—no one has a flawless history. They're about patterns of immaturity: low self-control, ego-driven responses, and combativeness over curiosity. In contrast, professionals who earn trust speak with goodwill, own their part, appreciate complexity, and adapt without grievance.

    As you build your network—whether hiring, partnering, or job-seeking—pay attention to these signals. They reveal how someone handles conflict, uncertainty, and relationships. And that awareness cuts both ways: self-reflect to avoid exhibiting them yourself. Practice pausing before critiquing absent parties; frame past experiences with ownership and nuance; view changes as opportunities rather than injustices.

    In leadership, especially technology, trust compounds success. Spot these flags early, in yourself and others, and steer toward collaborations that build it rather than erode it.

    About the Author:

    Aaron Davis is a seasoned leader and talent acquisition expert. With a career spanning over two decades, Aaron has built and led successful teams across various industries, including tech staffing, software development, healthcare, & real estate investment. He founded Reliant Search Group in 2019 and still enjoys connecting business leaders with critical talent. Aaron hosts the "Being Built" podcast, where he shares insights on business growth and leadership.

  • 11/28/2025 8:32 AM | Marla Halley (Administrator)

    Cybersecurity in 2026 is at the center of digital transformation. AI-driven threats, expanding attack surfaces, and global regulatory shifts are rewriting the rules of risk management. Leaders who understand these dynamics will shape organizations that thrive in a world where security is inseparable from innovation. These five trends highlight the changes shaping cybersecurity and why acting today sets the stage for long-term growth.

    1. AI: The Double-Edged Sword

    Artificial intelligence has become a pivotal force in both offensive and defensive cybersecurity operations. Threat actors are increasingly leveraging generative AI to craft highly convincing phishing campaigns and other social engineering attacks at scale. According to SentinelOne’s 2025 report, phishing attacks surged by 1,265% year-over-year, largely driven by the adoption of GenAI in attack workflows. In response, defensive AI systems are employing behavioral analytics and predictive modeling to detect anomalies and mitigate threats in real time, aiming to counter the growing sophistication and volume of AI-enabled attacks.

    The implications extend far beyond phishing. Gartner predicts that by 2027, AI agents will reduce the time it takes to exploit account exposures by 50%, dramatically increasing the speed and scale of credential theft and account takeover attacks. This trend highlights a critical shift toward automation in cybercrime, forcing organizations to rethink response strategies and invest in adaptive security models that can keep pace with evolving threats. Organizations that fail to anticipate this shift risk facing attacks that surpass traditional defenses, leaving critical systems exposed in a matter of minutes.

    2. The Rise of Zero Trust Architecture

    Zero Trust Architecture (ZTA) has transitioned from conceptual to operational, now embedded across critical sectors like finance, healthcare, and government. It mandates verification of every access request, independent of origin or device. Microsegmentation and continuous authentication are considered foundational practices. Gartner predicts that by 2026, 10% of large enterprises will have a mature and measurable Zero Trust program in place. This trend highlights the growing focus on building resilient security frameworks to counter evolving cyber threats.
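    The core ZTA rule, verify every request regardless of origin, can be illustrated with a minimal policy check. This is a simplified sketch with invented field names (`device_compliant`, `resource_sensitivity`), not the policy engine of any real Zero Trust product; real deployments evaluate far richer identity, device, and context signals continuously.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        """Hypothetical signals evaluated for every single request."""
        user_authenticated: bool
        mfa_passed: bool
        device_compliant: bool
        resource_sensitivity: str  # "low" or "high"

    def evaluate(req: AccessRequest) -> bool:
        # Baseline for every request: verified identity AND a healthy device.
        # Note what is absent: network location grants nothing by itself.
        if not (req.user_authenticated and req.device_compliant):
            return False
        # Step-up: sensitive resources additionally require MFA.
        if req.resource_sensitivity == "high" and not req.mfa_passed:
            return False
        return True
    ```

    Even a request originating "inside" the corporate network is denied unless identity and device posture check out, which is the default-deny stance that minimizes the blast radius of internal breaches.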

    3. Rising Risks in Operational Technology

    The rapid expansion of connected Operational Technology (OT) devices is introducing new vulnerabilities across enterprise and industrial environments. These systems, which control critical processes, are increasingly interconnected, making them attractive targets for cyberattacks. To reduce risk and maintain operational continuity, security teams are prioritizing measures such as firmware integrity checks and network segmentation.

    Large-scale environments like smart cities and industrial systems face heightened exposure because of the sheer number and diversity of connected devices. According to IBM’s Cost of a Data Breach Report, the impact is significant: in 2025, 15% of organizations experienced OT-related breaches, and nearly a quarter of those incidents caused direct damage to OT systems or equipment, with an average cost of $4.56 million per breach.

    This expanding attack surface demands a shift toward asset-centric security models and real-time monitoring to prevent lateral movement and supply chain compromise.

    4. Endpoint Detection and Response: The Frontline of Cyber Defense

    In many cases, endpoints serve as the most accessible target for attackers. In a world of hybrid work and distributed networks, attackers often target laptops, mobile devices, and other endpoints as their primary entry point. Traditional antivirus tools, designed to detect known signatures, cannot keep up with advanced threats such as fileless malware, credential theft, and AI-driven exploits.

    EDR takes a proactive approach by continuously collecting and analyzing data from every endpoint on the network, including processes, performance metrics, network connections, and user behaviors. By storing this data in a centralized cloud-based system, EDR enables security teams to identify anomalies quickly and respond before attackers can move deeper into the network. When a threat is detected, EDR can immediately isolate the compromised device, preventing further spread and minimizing impact. IBM research shows that 90 percent of cyberattacks and 70 percent of breaches originate at endpoint devices, making robust monitoring and response capabilities a top priority. Organizations that rely solely on traditional antivirus remain vulnerable to modern attack techniques. To maintain resilience and respond quickly to threats, EDR should be a core component of every security strategy.
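    The behavioral-analytics idea behind EDR can be shown with a toy baseline comparison. The sketch below scores a hypothetical per-endpoint event rate with a simple z-score; production EDR platforms use far richer behavioral models and telemetry, so treat this only as the underlying concept, with all names and thresholds invented for illustration.

    ```python
    import statistics

    def anomaly_score(baseline: list[float], current: float) -> float:
        """Z-score: how many standard deviations 'current' sits from the
        endpoint's own recent baseline."""
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
        return abs(current - mean) / stdev

    def is_anomalous(baseline: list[float], current: float,
                     threshold: float = 3.0) -> bool:
        # Flag observations more than `threshold` deviations from normal.
        return anomaly_score(baseline, current) > threshold

    # Hypothetical hourly process-creation counts for one endpoint:
    history = [12.0, 15.0, 11.0, 14.0, 13.0, 12.0, 16.0]
    ```

    An hour with 95 process creations would score far above the threshold and trigger investigation or automatic isolation, while 14 would pass as normal. The key design point is that each endpoint is compared against its own behavior, not a global signature list.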

    5. Preparing for the Quantum Era

    Post-Quantum Cryptography (PQC) introduces cryptographic algorithms designed to withstand the computational power of quantum computers, which threaten to break traditional encryption methods like RSA and ECC. Instead of relying on current mathematical problems vulnerable to quantum attacks, PQC uses lattice-based, hash-based, and multivariate polynomial schemes that remain secure even in a quantum-driven world.

    The urgency for PQC adoption is growing as organizations recognize the long-term risk of “harvest now, decrypt later” attacks. Sensitive data encrypted today could be compromised in the future when quantum computing becomes mainstream. Gartner predicts that by 2029, advances in quantum computing will render applications, data, and networks protected by asymmetric cryptography unsafe, and by 2034, these methods will be fully breakable. Similarly, a Forbes Technology Council report highlights that quantum computing is now considered a top emerging cybersecurity threat, prompting U.S. policymakers to push for immediate preparation across both government and industry.

    PQC allows organizations to strengthen their encryption for the future while maintaining efficiency and compatibility with existing systems. By integrating quantum-safe algorithms into existing systems, businesses can maintain compliance, secure cloud environments, and protect IoT ecosystems against next-generation threats. This shift transforms cryptography from a static safeguard into a resilient, adaptive defense for the quantum era.
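    To make "hash-based" concrete, the sketch below implements a Lamport one-time signature, a classic construction whose security rests only on the strength of the hash function, which is the property that lets hash-based schemes resist quantum attacks on RSA- and ECC-style math. This is a teaching sketch, not a standardized PQC algorithm; real deployments should use NIST-standardized schemes, and a Lamport key must never sign more than one message.

    ```python
    import hashlib
    import secrets

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def keygen():
        # One pair of random 32-byte secrets per bit of a SHA-256 digest.
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
              for _ in range(256)]
        # Public key commits to the secrets via their hashes.
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def _bits(message: bytes):
        digest = H(message)
        return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

    def sign(message: bytes, sk):
        # Reveal one secret from each pair, selected by the digest bits.
        # One-time only: signing twice leaks enough secrets to forge.
        return [sk[i][bit] for i, bit in enumerate(_bits(message))]

    def verify(message: bytes, sig, pk) -> bool:
        # Hash each revealed secret; it must match the committed public value.
        return all(H(sig[i]) == pk[i][bit]
                   for i, bit in enumerate(_bits(message)))
    ```

    A verifier needs only the hash function, so no quantum algorithm against factoring or discrete logs helps an attacker; the trade-off is large keys and one-time use, which standardized stateful and stateless hash-based schemes address.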

    Conclusion

    Cybersecurity in 2026 is about staying ahead of threats before they emerge. AI-powered defenses, Zero Trust principles, and quantum-resistant cryptography are becoming standard practices for organizations that want to remain resilient. The companies that treat security as a core business strategy will be best positioned to protect assets, uphold compliance, and foster sustainable growth.

    Strengthen Your Cybersecurity Strategy

    At The Greentree Group, we help organizations protect critical data with comprehensive cybersecurity solutions. We work with federal, state, local, and commercial clients to identify threats, prevent vulnerabilities, and strengthen system security. Contact us today to take a proactive step toward securing your business.

    About The Author:

    Mackenzie Cole is an Analyst at The Greentree Group and a proud Wright State University alum, specializing in marketing strategy and analytics. With a passion for turning insights into impactful campaigns, Mackenzie has worked on a variety of multi-channel marketing initiatives with a focus on technology, creative storytelling, and connecting with local communities through purpose-driven marketing.

  • 10/16/2025 2:08 PM | Marla Halley (Administrator)


    As cybersecurity threats grow more sophisticated, the U.S. Department of Defense (DoD) has taken decisive action to protect sensitive data across its supply chain. The Cybersecurity Maturity Model Certification (CMMC) is now embedded in DoD contracting requirements. For organizations in the Defense Industrial Base (DIB), this is not just a regulatory shift; it is a strategic imperative.

    Why CMMC Matters

    CMMC is a tiered certification framework designed to safeguard Controlled Unclassified Information (CUI) and Federal Contract Information (FCI). Whether you're a prime contractor or a subcontractor, if you handle either type of data, you must comply.

    The program includes three assessment levels:

    • Level 1: Annual self-assessment for FCI.
    • Level 2: Self or third-party assessment for CUI.
    • Level 3: Government-led assessment for highly sensitive CUI.

    Why Compliance Is Urgent

    The final CMMC rule (32 CFR Part 170) took effect December 16, 2024, and the acquisition rule (48 CFR Part 204) becomes enforceable November 10, 2025. Non-compliance can result in:

    • Disqualification from DoD contracts.
    • Legal risks under the False Claims Act.
    • Reputational damage.

    Contractors must affirm continuous compliance in the Supplier Performance Risk System (SPRS), and all requirements flow down to subcontractors.

    Building Your Compliance Roadmap

    Achieving CMMC compliance is a journey, not a point-in-time exercise. Breaking the workload down into actionable steps is critical to maintaining focus. Here’s a phased approach:

    1. Understand the Framework:

    • Familiarize yourself with CMMC’s structure, domains, and practices. Map requirements to NIST SP 800-171 controls, and clarify whether your organization handles FCI, CUI, or both.
    • Another critical element is to review cloud providers and other connected systems, and to begin identifying shared responsibilities through a Shared Responsibility / Customer Responsibility Matrix.

    2. Readiness Assessment:

    • Determine your required CMMC level. This can be done through a review of your current contracts or through a conversation with your contract officer.
    • Review your current policies, procedures, and technical configurations. Documentation is key in achieving and maintaining CMMC compliance.
    • Conduct a gap analysis to identify areas needing improvement. Engaging with professionals who can provide guidance and expertise is crucial to help identify true gaps and to align business processes.

    3. Planning & Resourcing:

    • Develop a Plan of Action & Milestones (POA&M) to address gaps. This should be done at the objective level. This should also include prioritizing and budgeting for remediation.
    • Assign clear roles, define workflows, and identify necessary technology. Having a project manager or subject matter expert assigned to your compliance journey is essential.
    • Engage with certified experts and ensure internal ownership of compliance. The implementation of controls and objectives can be confusing. Having an expert who can give you advice and solutions will ensure that your interpretation of how you are meeting the controls does not cause you issues when it comes to an official assessment.

    4. Implementation:

    •  Update policies and procedures. Documentation is key in achieving compliance. Having clearly documented policies and procedures that address specific controls is necessary. Engaging with policy experts to ensure solid documentation is highly recommended.
    • “Document what you do, do what you document”
    • Enforce access controls. A key component of CMMC compliance is ensuring that only authorized users have access to the system and, furthermore, have access to CUI.
    • Deploy technical safeguards like encryption, a SIEM, MFA and endpoint protection.
    • Establish incident response and change control processes. Make sure that these processes are followed and that there is an audit trail so that the assessor can be provided with evidence.

    5. Continuous Monitoring:

    • Treat compliance as an ongoing effort. This includes documenting reviews, auditing processes, defining audit logs and audit review processes, and constantly ensuring that documentation is in line with implementation.
    • Use tools like SIEM and other alerting mechanisms to assist with audits of controls and objectives.  
    • Keep your POA&M updated as risks to your environment and compliance posture evolve.
    • Avoid superficial compliance and conduct mock assessments to uncover gaps.

    Preparing for the Assessment

    • Don’t just check boxes—tell a defensible story. Your System Security Plan (SSP), POA&M, and supporting documentation should clearly demonstrate how controls and objectives are implemented and enforced.
    • Use real-world examples to show how controls are implemented. Be prepared to guide the assessor through your implementation and compliance.
    • Conduct mock assessments to uncover gaps before the official evaluation. It is always good to check with designated experts to be sure you are in alignment. Contracting with a C3PAO (Certified 3rd Party Assessment Organization) to conduct a mock assessment before your official assessment will allow you to correct any known deficiencies before they are officially recorded.
    • Embed compliance into daily operations through automation and regular staff training. CMMC compliance is a culture shift for the entire organization.

    Real-World Lessons

    A case study from ProStratus highlights the value of a structured approach:

    • Conducting a thorough gap analysis and building a tailored POA&M.
    • Embedding compliance into daily operations and culture.
    • Ensuring that documented policies and procedures are clear, outline actual implementations, and are used throughout the organization.
    • Going into the assessment able to prove all 110 controls and 320 objectives. You should not go into the assessment with a POA&M.

    Common Pitfalls

    • Over-reliance on generic templates
    • Neglecting documentation
    • Lack of internal ownership
    • Treating compliance as a one-time project
    • Trying to complete this journey alone

    Success Factors

    • Leadership buy-in. A C-Level champion is absolutely necessary for success.
    • Clear documentation that identifies addressed controls and objectives.
    • Proactive security culture that addresses ALL employees and avoids siloing security and compliance to a “team.”
    • Treating compliance as a strategic advantage. The amount of time and energy that is necessary for achieving CMMC Level 2 is enormous, but this is also an opportunity to set your organization apart from competitors and assure primes and officiating bodies that you are serious about protecting sensitive data.

    Bottom Line:
    CMMC compliance is not just a regulatory hurdle—it’s an opportunity to strengthen your organization’s security posture and stand out in the defense contracting space. Start early, build a culture of compliance, and leverage expert guidance to ensure success.

    ###

    About the Author

    ProStratus is a CMMC Level 2 certified managed security service provider, delivering secure IT solutions across the Defense Industrial Base. Thomas Saul is the Director of Security and Compliance for ProStratus and is a Certified CMMC Assessor (CCA) who specializes in helping organizations operationalize compliance and building cybersecurity into daily operations.

  • 10/16/2025 2:03 PM | Marla Halley (Administrator)

    Navigating the Shift to Smarter, Self-Running Experiences

    Customer experience, or CX, is the sum of every interaction a customer has with your brand, from the first app notification to the final thank-you email. It's not confined to call centers; it's the seamless thread weaving through every touchpoint in a business, shaping loyalty in an era where expectations soar. Imagine a world where these experiences don't just react to needs. They anticipate them, resolve hiccups before they arise, and evolve effortlessly without endless human intervention. That is the autonomous CX landscape on the horizon, where AI doesn't replace people but amplifies them, touching every corner of operations in retail, finance, healthcare, education, and beyond. As industries race toward this future, four foundational pillars—strategic vision, quality assurance, training rigor, and mechanical integration—stand out as the blueprint for success. This is not just theory. It is an unfolding story of transformation, from today's reactive support to tomorrow's predictive powerhouses across all customer-facing channels. Let's dive in, exploring how these pillars build resilient, engaging journeys that keep your audience hooked and your operations ahead.

    Pillar 1: Strategic Vision, Charting the Course Beyond the Budget

    Every great shift starts with a map. In the rush to AI, too many leaders fixate on tools and costs, missing the bigger picture: Where is your CX headed in an autonomous era? Strategic vision demands asking bold questions. How will AI evolve your client interactions across apps, in-store visits, and virtual consultations? What seamless experiences will set you apart by 2030?

    Picture customer hubs evolving from fragmented silos into dynamic ecosystems, where normalized, profiled information fuels multi-agent systems. Front-line AI handles routine queries in chatbots or kiosks, escalating to specialized "supervisors" that tap deeper insights, handing off to humans only when nuance calls for it. This is not about slashing expenses. It is about directional transformation, prioritizing long-term client loyalty over short-term wins in every business domain. Without this north star, deployments falter into chaos. Engage your teams by co-creating these roadmaps. Start with workshops that paint vivid "day-in-the-life" scenarios, turning abstract strategy into tangible excitement for non-technical staff and data-driven insights for technical experts.

    Pillar 2: Quality Assurance, The Glue Holding It All Together

    In an autonomous world, consistency is not optional. It is the heartbeat of trust. Quality assurance ensures every AI interaction feels polished, reliable, and human-touched, even when it is not, whether in a drive-thru order or a personalized email campaign. Think real-time coaching: Scripts, prompts, and oversight that mirror elite outsourcing teams, grading interactions on sentiment, flow, and resolution.

    Envision transcribing 100 percent of interactions to forge knowledge repositories, not just for compliance, but to train behaviors that delight across channels. In high-stakes sectors like finance or education, this pillar prevents drift. Tools for consistent reporting flag anomalies early, like a customer's frustration spiking mid-conversation on a mobile app. The payoff? Frictionless experiences that boost retention business-wide. To keep readers riveted, frame quality as a narrative hero. Share anonymized "before-and-after" stories in your internal comms, showing how one overlooked metric turned a complaint cascade into rave reviews, resonating with CXOs eyeing ROI and frontline teams craving simplicity.

    Pillar 3: Training Rigor, Building AI That Learns Like Humans

    Autonomy thrives on adaptability, and that is where training rigor shines. Gone are static models. Enter AI that ingests personas—style guides and prompts tailored for customer-facing finesse—while undergoing relentless coaching cycles. It is like raising a digital apprentice: Start with zero knowledge base, feed it transcribed dialogues, regional dialects (hello, Southern US inflections), and iterative feedback to refine accuracy in emails, chats, or voice assistants.

    This pillar powers the story's turning point: From clunky bots to intuitive agents that personalize on the fly, like suggesting "your usual sausage biscuit" based on geolocation and past orders during an in-app upsell. For resource-strapped teams, like nonprofits dodging DIY pitfalls, lean on accessible platforms for workflow training and agent licenses, bypassing unguided tools that promise quick fixes but deliver frustration. Make training engaging by gamifying it. Leaderboards for "best anomaly hunts" (spotting order errors via license plates) turn compliance into collaboration, preparing workforces for a job market craving self-motivated learners over rote specialists—appealing to technical builders and visionary leaders alike.

    Pillar 4: Mechanical Integration, The Engines of Seamless Automation

    No autonomous tale is complete without the machinery that makes it hum. Mechanical integration weaves robotics and edge tech into the fabric of CX, handling the grunt work so humans focus on magic, from warehouse fulfillment to personalized retail recommendations. Dual cameras spotting menu items with yes/no precision? Edge-localized machine learning slashing voice latency to milliseconds? Headset analytics canceling noise while monitoring volume trends? These are not gadgets. They are the plot devices propelling us forward.

    From burger-flipping arms streamlining prep to shelf-scanning bots enforcing planograms with image-code smarts, this pillar scales repetition into reliability across supply chains and service desks. Autonomous prototypes in quick-service spots run end-to-end robotic ops, while manufacturing cameras enforce glove checks for safety. Early costs are low, but watch for upticks as efficiencies compound, like cloud trends on steroids. Hook your audience with demos. Virtual tours of edge-powered point-of-sale systems surviving outages prove how mechanical muscle delivers outage-proof speed and sparks innovation across retail, healthcare tele-health, or beyond, bridging the gap for non-technical users with visuals and CXOs with scalability metrics.

    Weaving the Pillars into Your Autonomous Story

    These four pillars are not silos. They interlock to narrate a compelling arc: From data chaos to predictive bliss, reactive fixes to proactive delight in every customer touchpoint. High-level steps to get started? First, audit your data for strategic alignment. Second, pilot quality-focused transcriptions in one channel. Third, roll out persona training with regional tweaks. Fourth, integrate mechanical pilots for latency-sensitive tasks. Fifth, cycle through refinements, benchmarking against 5-year adoption curves.

    The risks? Deepfakes from mere minutes of media, fraud via unchecked access, or cost swings from unchecked scaling. Counter with multi-factor authentication, anomaly detection, and vigilant oversight. The reward? Unified channels yielding hyper-personalized, resilient CX that captivates customers and empowers teams, positioning every forward-thinking business for enduring success.

    As we edge toward this autonomous horizon, the question is not if, but how boldly you will lead the change. Dive deeper into these pillars to craft your organization's next chapter: one where CX isn't a department, but the defining edge of your entire enterprise.

    ####

    About The Author

    Bill Magnuson is a seasoned leader in technology transformation, with a strong background in driving innovation, strategic growth, and operational excellence. He combines business acumen with tech expertise to help organizations modernize, scale sustainably, and deliver greater value to customers.


© 2026 Technology First. All rights reserved.