
Tech News Blog

Connect with TECH NEWS to discover emerging trends, the latest IT news and events, and enjoy concrete examples of why Technology First is the best connected IT community in the region.


  • 05/02/2026 12:51 PM | Marla Halley (Administrator)

    A Quick Story from the Field

    Not long ago, I met a budding tech professional who had done everything right on paper.

    They had completed a technical training program, earned industry-recognized certifications, and showed up to every opportunity ready to work. On paper, they looked like exactly the kind of candidate employers say they want.

    But interview after interview, they kept hearing the same thing:

    “We’re looking for someone with a bit more experience.”

    At some point, “more experience” becomes code for something else.

    So, we shifted the focus.

    We spent time not just reviewing technical knowledge but practicing how to talk through problems. How to explain what you don’t know yet. How to show curiosity and creativity instead of hesitation or fear. How to connect past experiences—even non-technical ones—to the role in front of you.

    In the next interview, something changed.

    When asked a question they didn’t know the answer to, they didn’t freeze. They walked through how they would approach solving it. They asked a clarifying question. They showed their thinking.

    They got the job.

    Same certifications. Same résumé.

    Different outcome.

    What changed wasn’t their technical ability; it was their ability to communicate it, navigate uncertainty, and show up with confidence.

    That’s the last mile.

    We keep hearing about the “tech talent shortage.”

    But from where I sit, the issue isn’t a lack of people—it’s a lack of readiness.

    Across industries, employers are struggling to fill roles. At the same time, thousands of motivated, trained individuals are working hard to break into tech. Somewhere between those two realities sits what I call the last-mile problem—the gap between learning the skills and thriving in the role.

    And here’s the part we don’t talk about enough:

    That gap isn’t just technical.

    It’s Not Always About More Training—It’s About Better Alignment

    Over the past decade, we’ve seen an explosion of technical training programs, certifications, and alternative pathways into tech. That’s a good thing. More doors have opened.

    But many employers are still hiring as if the only thing that matters is what’s on a résumé—specific tools, years of experience, or a degree.

    Meanwhile, what hiring managers often really want shows up in different ways:

    • Can this person communicate clearly with a team?
    • Can they troubleshoot under pressure?
    • Can they ask the right questions when they don’t know the answer?

    Those aren’t bullet points. Those are behaviors.

    And they’re often the difference between someone who gets hired… and someone who succeeds.

    The Hard Truth: “Entry-Level” Isn’t Entry-Level Anymore

    Let’s be honest—many so-called entry-level roles now require 2–3 years of experience, multiple certifications, and the ability (or expectation) to hit the ground running on Day One.

    That expectation creates a bottleneck.

    It leaves capable, motivated individuals stuck just outside the door—not because they lack potential, but because they haven’t had the chance to practice in a real-world environment.

    And it puts pressure on employers, who are searching for “perfect fits” in a market where those candidates are increasingly rare.

    Soft Skills Are the Real Last Mile

    Here’s what closes that gap:

    Professional skills. Human skills. The things that don’t always show up on a transcript.

    Adaptability. Communication. Accountability. Curiosity.

    In tech roles especially, where things change quickly and problems don’t come with step-by-step instructions, these skills matter just as much as technical knowledge.

    Someone can learn a new platform.

    It’s much harder to teach someone how to navigate ambiguity, collaborate with a team, or recover from a mistake with confidence.

    The most successful early-career professionals aren’t the ones who know everything—they’re the ones who know how to learn, connect, and persist.

    This Isn’t Just a Candidate Problem

    It’s easy to frame this as a talent issue. It’s not.

    The last-mile gap exists because training providers and employers are often operating on parallel tracks instead of in partnership.

    If we want to solve it, we need to meet in the middle:

    • Training programs must integrate real-world expectations and professional skill development
    • Employers must create on ramps that allow for growth, not just immediate perfection
    • Both sides must recognize that potential is just as valuable as experience

    Because the truth is—no one starts fully ready.

    Closing the Gap

    The organizations that figure this out will have a competitive advantage.

    They’ll build stronger pipelines, improve retention, and develop talent that grows with them—not just fits a job description on paper.

    And they’ll help reshape a system that, right now, is leaving too many capable people just one step away from opportunity.

    That’s the last mile.

    And it’s one worth investing in.

    Dr. Demarus Crawford-White is Executive Director of NPower Ohio, where she leads efforts to expand access to technology training and career pathways across the region. With more than 20 years of experience in higher education and workforce development, she is a strong advocate for equity, skills-based learning, and debt-free pathways into tech careers. Her work focuses on connecting untapped talent to in-demand opportunities and strengthening Ohio’s technology workforce.

  • 05/02/2026 12:29 PM | Marla Halley (Administrator)

    Many mid-market IT leaders face a persistent challenge: too much to manage, too few staff, and limited budgets. The problem is often structural and increasingly difficult to overcome.

    This is why more organizations are choosing co-managed IT services as a long-term strategic model, not just a stopgap. Here’s what the data reveals, the most common mistakes to avoid, and what the ROI actually looks like.

    The Talent Problem Is Real

    The staffing challenge in mid-market IT is not a short-term blip. Robert Half's 2025 report says 87% of tech leaders face challenges finding skilled workers. This problem is intensifying as demand outpaces the available talent pool, especially for cybersecurity, cloud, and AI skills.

    Specialization is also an issue. A typical mid-market IT department has four to eight people. They cover helpdesk support, server maintenance, network management, cybersecurity, cloud operations, and compliance. Large enterprises assign entire teams to each function. The gap is not just headcount, but expertise.

    Worse, filling specialized roles is a lengthy and expensive process: senior cybersecurity and cloud roles routinely sit open for six months or more before being filled. The compensation required to compete adds another layer of pressure. Robert Half's 2026 Salary Guide lists cybersecurity engineer salaries in the $118,500 to $190,750 range, well above most mid-market salary bands.

    Even a successful hire does not mean stability. Info-Tech Research Group's 2025 survey found that 42% of IT employees are looking for other opportunities. Mid-market organizations feel this retention risk more acutely, due to limited advancement paths.

    The Security Gap Remains Deeply Concerning

    Cybersecurity exposure in the mid-market is disproportionate and under-appreciated. According to 2025 data, 67% of mid-market organizations report moderate-to-critical cybersecurity skills gaps. And 86% experienced at least one cyber breach in 2024, with more than half attributing it to a lack of security skills or training.

    Mid-sized companies do not fly under the radar. Arctic Wolf's 2026 Threat Report shows that ransomware leak-site activity disproportionately affects small- and mid-market organizations, confirming that threat actors actively target companies less likely to have enterprise-grade defenses. The financial exposure is substantial. Arctic Wolf's 2025 Threat Report states that the median initial ransom demand reaches $600,000, and legal, government, retail, and energy organizations regularly receive demands of $1 million or more.

    Most mid-market firms cannot afford to build an in-house Security Operations Center. Basic 24/7 SOC coverage demands three to five dedicated security professionals, a SIEM, and threat intelligence subscriptions. The economics rarely make sense until an organization reaches roughly 2,000 employees and $250 million in revenue.

    What Co-Managed IT Is (and Isn’t)

    Co-managed IT is not outsourcing. You do not hand over the keys and walk away. Internal IT teams keep control and make decisions. External providers supply operational coverage, special expertise, and advanced tools.

    This distinction is important. One big myth is that a co-managed partner makes your internal staff redundant. This is not true. Instead, your team will do less firefighting and spend more time on projects that drive your business forward. This is the beauty of co-managed services.

    Three Common Mistakes to Avoid

    Even when organizations commit to co-managed IT, implementation missteps can undermine the value. Watch for these:

    • Treating it like a vendor relationship. Co-managed success relies on documented business outcomes, not just SLAs. Define what "IT working well" means for your leaders, and revisit this definition as business priorities change.
    • Allowing unchecked customization. Exceptions increase complexity and degrade service quality, which is why the most reliable co-managed providers use standardized platforms. A sound governance framework defines approved flexibility (e.g., service scope, recovery objectives) and enforces non-negotiable standards (e.g., security baselines, monitoring platforms).
    • Ignoring internal staff concerns. Internal IT staff may see co-managed partners as threats to their roles. Address this early by reframing the relationship as one that builds skills and career growth, not one that eliminates them. This approach is critical to adoption and long-term success.

    What the ROI Actually Looks Like

    The ROI on co-managed services depends on many factors. Based on my firm’s experience and industry data, here’s what organizations experience with co-managed IT:

    • Staffing cost reduction: Organizations typically see 30 to 45% reductions in IT staffing-related expenses compared to full in-house hiring models.
    • Downtime reduction: Business interruption costs $5,000 or more per minute for most mid-market firms; proactive monitoring reduces downtime from hours to minutes.
    • Project velocity: Internal teams working alongside external providers on strategic initiatives can often complete projects 40-60% faster.
    • Breach containment: Organizations with strong SOC operations often cut breach dwell time to four hours or less, while those without managed security face dwell times of eighteen days or more.
    • Audit efficiency: Standardized compliance automation can reduce audit cycle timelines by 30-40%, eliminating costly last-minute remediation.

    Co-managed IT does not provide a silver bullet. You need the right provider, clear governance, and honest change management. But mid-market organizations that face enterprise-scale demands and limited internal resources can use co-managed IT to build operational resilience, improve security, and gain capacity to pursue strategic goals. Such benefits should prompt CIOs and other tech leaders to closely examine their co-managed options.

    Jesse Kegley is a co-founder and Chief Revenue Officer at Emerge. Jesse drives Go-to-Market strategies, always aligning with the company’s growth-oriented core values. He also serves or has served on advisory boards for Cisco, Microsoft, and Ingram Micro, providing valuable insights into industry trends and technological advancements.

  • 03/26/2026 2:05 PM | Marla Halley (Administrator)


    I’m sure you have heard the trending news by now: the United States is experiencing a labor shortage that is affecting many industries, including manufacturing, logistics, healthcare, transportation, and agriculture[1]. This news has been reported in regional newspapers, business magazines, and trade journals. These industries should sound very familiar, as they are all in high demand in most markets across the country. What is causing this shortage, and what can employers do about it? The cause is a combination of factors: our nation’s aging population, low unemployment, people opting out of the workforce, working conditions, skills gaps, and the school-to-job pipeline. There is good news in many of these causative factors: employers across the country still have a lot of control over them, and they can grab the bull by the horns with robust workforce planning.

      Workforce Planning Primer

      The Society for Human Resource Management advocates that all companies practice the art of workforce planning[2]. There is no single formula, because every company’s needs differ based on its culture, strategic plan, and goals and objectives. That said, your company can develop a workforce plan that guides you toward a robust talent pipeline and better positions you to accomplish your goals. The steps in the planning process are straightforward:

    1. Conduct a supply analysis.
    2. Conduct a demand analysis.
    3. Conduct a gap analysis.
    4. Formulate a solution (plan).

    The purpose of the supply analysis is to ensure your company knows the regional and national sources of potential employees. This analysis should look at the numbers and skills available, as well as demographics, including generational representation. It will help your company project future retirements and resignations, and it can also surface different talent pools that can be tapped (e.g., new graduates, current employees who can be upskilled, veterans, people with disabilities, older adults, returning citizens, and new Americans). By examining and utilizing all of these categories, you can create a very resilient workforce for your company.

    The demand analysis will help you design your future workforce composition. This is where your strategic plan comes in handy, ensuring you are recruiting the skill sets your new and expanded business requires. Here you ask what skills and experience you need in every position. You’ll want to review your job descriptions closely to ensure they are accurate and offer your company maximum flexibility.

    The gap analysis makes a comparison between what you said you need in your demand analysis and what is available through your supply analysis. 
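
    In code terms, this step is just a per-skill subtraction of supply from demand. Here is a toy sketch (all job names and numbers are hypothetical, purely for illustration):

```python
# Toy gap analysis: projected demand minus projected supply, per skill set.
# All figures are hypothetical placeholders.
demand = {"network admin": 12, "helpdesk": 8, "cloud engineer": 5}
supply = {"network admin": 9, "helpdesk": 10, "cloud engineer": 1}

for skill, needed in demand.items():
    gap = needed - supply.get(skill, 0)
    if gap > 0:
        print(f"{skill}: short {gap} -> recruit, upskill, or tap new talent pools")
    else:
        print(f"{skill}: covered (surplus of {-gap})")
```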

    Last but not least is your solution analysis. Your solution should include how you plan to recruit to meet your needs, as well as plans for upskilling current employees and tapping into talent pools your company may never have used before.

    An Easy-to-Find Labor Resource

    One readily available resource in every market is students, at all levels of education. Students from high school through graduate school across the country are looking for work-based learning opportunities that help them discern their career choices. In Ohio, we are lucky to have about 400,000 high school students and close to 400,000 college and university students. Imagine if we could keep the vast majority of these students in our state. Our labor shortage in Ohio would be long gone!

    Cassie is President of The Strategic Ohio Council for Higher Education (SOCHE), a non-profit organization that specializes in assisting employers to recruit, train, employ, develop, and manage interns. After 59 years in business, SOCHE knows that today’s interns are tomorrow’s workforce!  If you’re looking for talent, let SOCHE help connect you with your next generation of workforce.  Reach out to soche@soche.org or www.soche.org.

    [1] https://www.uschamber.com/workforce/understanding-americas-labor-shortage

    [2] https://www.shrm.org/topics-tools/tools/toolkits/practicing-discipline-workforce-planning


  • 03/01/2026 11:25 AM | Marla Halley (Administrator)

      AI is no longer experimental. According to the 2026 Software Lifecycle Engineering Decision Maker Survey, 76.6% of organizations are actively using AI in development workflows, with another 20.4% evaluating its implementation. Only 3.1% remain disengaged.[1]

      Automation and AI have reshaped how we build, deploy, and protect software. From speeding up code delivery to enhancing threat detection, these systems promise speed, consistency, and scale. But automation isn’t infallible—and when it goes wrong, as it already has, the consequences ripple across entire industries. This raises an important question: who is accountable when automated systems fail, and how should we rethink risk transfer in response?

      When Automation Fails at Scale

      In July 2024, a routine security update triggered a global technology outage that left millions of Windows machines unusable overnight. A flawed configuration update for the widely deployed endpoint security agent caused systems to crash into boot loops, disrupting airlines, hospitals, broadcasters, banking systems, and emergency services.

      The root cause? A bug in the internal validation process—a tool meant to ensure updates were safe. Instead, it mistakenly allowed a defective update to reach customers’ systems. This wasn’t a cyberattack. It was a failure of automated testing and quality assurance.

      The fallout was vast even with a swift response. While many systems were restored within days, the financial toll on individual organizations was significant. Delta Air Lines alone claimed hundreds of millions of dollars in losses due to canceled flights and operational chaos. The scale of disruption underscores a fundamental truth: automation amplifies both benefits and failures.

      The Illusion of Infallible Automation

      Automation is often sold as a panacea. It promises faster releases, fewer human errors, and continuous delivery at scale. But experts have noted that automated validation systems are still software, and therefore still prone to defects. As one analysis observed after the outage, automation tools can miss edge cases or malformed data exactly because they operate within predefined assumptions.

      In the 2024 incident, the content validator allowed a defective configuration file to pass through because its own logic failed to spot the mismatch. This gap illustrates an important point: automation inherits the limitations and blind spots of its creators and its design. No matter how sophisticated, automated testing can only check for what it’s designed to anticipate.

      Equally consequential is the practice of deploying updates globally without staged rollouts or “canary” testing that limits blast radius. Had the faulty update been deployed to a small subset first, the outage might have been contained before it became global.
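
      The pattern is easy to express in code. Here is a minimal sketch of a staged rollout gate (ring names, sizes, and the health check are illustrative assumptions, not any vendor's actual process):

```python
import random

# Illustrative staged ("canary") rollout: each ring must stay healthy before
# the update reaches the next, larger ring. All names and thresholds are
# hypothetical placeholders.
RINGS = [("canary", 0.01), ("early", 0.10), ("broad", 0.50), ("global", 1.00)]

def ring_is_healthy(ring: str) -> bool:
    # Placeholder for real telemetry: crash rates, boot loops, error spikes.
    return random.random() > 0.05

def roll_out(update_id: str) -> None:
    for ring, fraction in RINGS:
        print(f"Deploying {update_id} to {ring} ring ({fraction:.0%} of fleet)")
        if not ring_is_healthy(ring):
            print(f"Halting {update_id}: {ring} ring unhealthy, failure contained")
            return  # a contained incident instead of a global outage
    print(f"{update_id} fully deployed")

roll_out("config-update-291")
```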

      Accountability in a Fragmented Risk Landscape

      Traditionally, software vendors deliver products with liability clauses that limit financial exposure. Customers bear much of the operational risk when something goes wrong. This model assumes vendors won’t be at fault too often, and that organizations will manage their own risk through internal controls, testing environments, and contingency planning.

      As dependency on third-party tools increases across businesses, the lines of accountability blur and ecosystem risk grows.

      From Insurance to Guarantees: Shifting Risk Transfer Models

      One emerging response in the cybersecurity ecosystem is the integration of financial risk transfer mechanisms alongside technical tools. Traditional cyber insurance policies have long been used to shift risk—covering costs associated with breaches, ransomware attacks, and business interruptions. These operate reactively and often exclude systemic or vendor-related failures.

      In contrast, some companies have begun offering guarantees or warranties backed by insurance that tie performance outcomes to financial protection. For example, one deep learning-based cybersecurity provider teamed with an insurer to offer a performance guarantee with ransomware warranty coverage up to millions of dollars—signaling confidence in both product effectiveness and risk mitigation.

      Similarly, cyber warranty programs embedded with solutions now exist where customers receive financial backstop in the event of qualifying incidents. This helps to cover forensic costs, legal fees, or response activities.

      These approaches represent a shift from purely technical performance to outcome-based assurances, essentially placing some level of financial accountability on the provider when specific guarantees aren’t met.

      Why This Matters Now

      The cyber insurance market itself reflects a changing risk calculus. Claims have surged, particularly around ransomware and third-party failures, pushing insurers to tighten underwriting and scrutinize vendor risk more closely. In some cases, insurers now demand continuous monitoring and proactive threat mitigation as prerequisites for coverage.

      Meanwhile, hybrid models that blend warranty and insurance help bridge gaps between technical defense tools and financial resilience. They push organizations—and the vendors they work with—to think beyond feature checklists toward shared accountability for outcomes.

      A Framework for Shared Accountability

      As digital systems continue to grow in complexity and interdependence, organizations need a framework that acknowledges both technical and financial aspects of risk.

      • More granular vendor commitments increase trust in product performance.
      • Integrated risk transfer ensures incidents don’t derail business continuity.
      • Retaining human oversight ensures automation enhances judgment.
      • Continuous feedback turns failures into systemic improvement.

    Bridging Promise and Trust

    Automation and AI have immense potential to drive efficiency and scale, but their unchecked use can mask latent risks. As the industry evolves, true accountability will come from aligning technical performance with shared financial responsibility and risk management frameworks.

    Closing the accountability gap isn’t about eliminating automation. It’s about designing systems, contracts, and risk policies that recognize the shared stakes of all parties involved—vendors, customers, insurers, and regulators alike.

    About the Author:

    Michael Benzinger is Vice President, Director of Engineering at Cardre Information Security.

    [1] https://futurumgroup.com/press-release/ai-reaches-97-of-software-development-organizations/


  • 02/26/2026 3:42 PM | Marla Halley (Administrator)

    Manufacturing environments are becoming more automated, more connected, and more complex than ever before. While this progress unlocks efficiency and productivity, it also introduces new vulnerabilities. When something goes wrong on the plant floor, the speed at which you recover can mean the difference between a minor disruption and a costly production shutdown.

    Common Problems That Disrupt Production

    1. Downtime caused by missing or outdated program backups
    A machine goes down unexpectedly. The maintenance team investigates and discovers the PLC program has been modified at some point—but no one knows when, why, or by whom. Worse, the only backup available is months old or incomplete. Production stops while engineers scramble to rebuild or recover the correct version. What should have been a quick fix turns into hours—or even days—of lost output.

    2. Untracked PLC or robot changes
    On a busy shop floor, multiple technicians, engineers, and contractors may access control systems. Without a centralized way to track changes, small adjustments can create big problems. A line that ran perfectly yesterday suddenly behaves unpredictably today. Without an audit trail, troubleshooting becomes guesswork.

    3. Audit and compliance challenges
    Whether driven by internal standards, customer requirements, or regulatory bodies, many manufacturers must demonstrate control over their automation assets. When documentation is scattered across personal laptops, USB drives, or outdated servers, preparing for an audit becomes stressful, time-consuming, and risky.

    4. Knowledge loss due to employee turnover
    Experienced technicians often carry critical knowledge about machine configurations and program changes. When they leave, retire, or change roles, that knowledge can disappear with them. The next time an issue occurs, teams may struggle to understand how systems were configured or why certain changes were made.

    These scenarios are more common than many organizations would like to admit. The good news is that they are also preventable.

    Building Resilience with Octoplant

    Resilient manufacturing operations are not just about preventing downtime—they are about recovering quickly and confidently when issues occur. This is where Octoplant plays a critical role.

    Octoplant is designed to centralize, secure, and manage automation assets across the entire plant. Instead of relying on scattered backups and manual documentation, manufacturers gain a single source of truth for their control programs and configurations.

    Here’s how Octoplant helps companies improve resilience and reduce time to recovery.

    1. Automated, reliable backups
    Octoplant performs regularly scheduled backups of PLCs, robots, HMIs, and other automation devices. This ensures that the most recent, validated version of each program is always available. When a failure occurs, maintenance teams can quickly restore the correct version—eliminating guesswork and reducing downtime.

    2. Full change tracking and audit trails
    Every change to a control program is tracked. Teams can see who made the change, when it happened, and what was modified. This visibility simplifies troubleshooting and ensures accountability across the organization.
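
    Conceptually, the mechanism is simple even though the product does far more: hash each fresh backup, compare it with the last known-good version, and record who changed what and when. A minimal sketch of that pattern (illustrative only; this is not Octoplant's actual implementation or API):

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log = []  # in practice this lives in a secured, centralized repository

def record_backup(device: str, program: bytes, author: str, last_hash: str | None) -> str:
    """Hash the uploaded program; if it differs from the last backup, log an audit entry."""
    new_hash = hashlib.sha256(program).hexdigest()
    if new_hash != last_hash:
        audit_log.append({
            "device": device,
            "author": author,
            "when": datetime.now(timezone.utc).isoformat(),
            "old_version": last_hash,
            "new_version": new_hash,
        })
    return new_hash

# Hypothetical example: a scheduled backup, then an untracked contractor edit.
h1 = record_backup("PLC-07", b"LD X0\nOUT Y1", "j.smith", None)
h2 = record_backup("PLC-07", b"LD X0\nOUT Y2", "contractor-42", h1)
print(json.dumps(audit_log, indent=2))
```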

    3. Centralized version control
    With Octoplant, all automation programs are stored in a centralized, structured repository. Instead of searching through multiple laptops or network folders, engineers can instantly access the latest approved version. This reduces the risk of loading outdated or incorrect programs during recovery.

    4. Faster troubleshooting and recovery
    When something goes wrong, time is critical. Octoplant provides immediate insight into program differences, recent changes, and system status. Maintenance teams can quickly identify the root cause and restore operations—often in minutes instead of hours.

    5. Knowledge retention across the workforce
    By documenting changes and storing programs centrally, Octoplant captures institutional knowledge. Even when employees leave or retire, the system retains the history and context needed to keep operations running smoothly.

    From Reactive to Resilient

    In modern manufacturing, disruptions are inevitable. Equipment fails, changes are made, and people come and go. The difference between a fragile operation and a resilient one lies in preparation, visibility, and control.

    Octoplant gives manufacturers the tools they need to move from reactive troubleshooting to proactive resilience. By ensuring that every automation asset is backed up, tracked, and recoverable, companies can protect production, reduce risk, and maintain confidence in their operations.

    When downtime strikes, resilience isn’t just about getting back online—it’s about how fast and how confidently you can do it. With Octoplant, recovery becomes a controlled, predictable process.

    About the author:

    Mike Rolfes is an account manager at ATR Automation. ATR Automation has been providing industrial automation and electrical engineering solutions since 1956. As we have grown alongside our community over the decades, we have become the trusted name for industrial automation software in the region.

    To learn more about Octoplant, please reach out to ATR Automation.  I can be reached at michaelrolfes@atrautomation.com or (513) 353-1800 ext. 5037.


  • 01/26/2026 3:49 PM | Marla Halley (Administrator)

    As organizations rapidly move applications and data to cloud platforms, cloud identity providers have replaced the network perimeter as the primary security boundary. Compromising a single account can provide broad access, making identity one of the highest-value targets for attackers.

    Multi-factor authentication (MFA) was once the most effective defense against account takeover. Today, it remains necessary—but it is no longer sufficient without additional steps.

    What MFA Was Built to Prevent

    Traditional phishing attacks focused on stealing credentials. Users were tricked into entering a username and password into a fake website, which attackers then reused to log in to the real service.

    MFA disrupted this model. Even with stolen credentials, attackers could not complete authentication without access to the second factor. For years, this significantly reduced phishing-related compromises.

    That protection assumed attackers were outside the authentication flow. Modern attacks no longer operate under that assumption.

    How Adversary-in-the-Middle Attacks Bypass MFA

    Adversary-in-the-Middle (AiTM) phishing shifts the attack from credential theft to session theft.

    Instead of sending users to a fake login page, attackers proxy the real sign-in experience. The victim authenticates to the legitimate service and completes MFA normally. Behind the scenes, the attacker relays all traffic and captures the resulting session token.

    Session tokens prove that authentication has already occurred. Once issued, they allow access without requiring the password or MFA again. If an attacker steals the token, MFA is effectively bypassed.

    A Typical AiTM Attack Flow

    1. The user receives a phishing email designed to create urgency.
    2. Clicking the link routes the user through attacker-controlled infrastructure.
    3. The attacker proxies the real login service.
    4. The user enters credentials and completes MFA.
    5. The identity provider issues a session token.
    6. The attacker captures and replays the token to access the account.

    From the identity provider’s perspective, the attacker’s session is valid. Authentication already succeeded.

    Why Traditional MFA Falls Short

    Most MFA methods—SMS codes, authenticator apps, and push approvals—can be relayed in real time. AiTM attacks exploit this by forwarding challenges and responses between the victim and the real service.

    Because the session token is issued after MFA is completed, MFA alone does not prevent token theft or reuse. Defending against AiTM requires controls that either prevent token capture or limit token usability.

    Controls That Actually Reduce Risk

    Phish-Resistant Authentication

    FIDO2 security keys, passkeys, and certificate-based authentication are resistant to relay attacks. These methods cryptographically bind authentication to the legitimate service and cannot be replayed through a proxy.
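
    The relay resistance comes from origin binding. In WebAuthn, the browser embeds the origin it actually talked to in the signed client data, so a response relayed through a look-alike proxy fails verification. A simplified sketch of the relying party's origin check (domain names are hypothetical; real verification also checks the challenge and the authenticator's signature):

```python
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying party

def origin_check(client_data_json: bytes) -> bool:
    # The authenticator's signature covers clientDataJSON, so an AiTM proxy
    # cannot rewrite the origin without invalidating the signature.
    client_data = json.loads(client_data_json)
    return (client_data.get("type") == "webauthn.get"
            and client_data.get("origin") == EXPECTED_ORIGIN)

legit = json.dumps({"type": "webauthn.get", "origin": "https://login.example.com"}).encode()
proxied = json.dumps({"type": "webauthn.get", "origin": "https://login.examp1e.net"}).encode()
print(origin_check(legit), origin_check(proxied))  # True False
```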

    Device-Based Access Controls

    Requiring trusted devices adds a second enforcement layer. During the login process, the identity provider performs an additional check to validate that the request comes from a trusted, managed device rather than an attacker’s proxy server.

    Session Token Protection

    Short session lifetimes, token binding, and continuous access evaluation reduce the value of stolen tokens and limit attacker dwell time.

    Continuous Detection

    Identity Threat Detection and Response (ITDR) tools identify anomalous behavior such as unfamiliar devices or impossible travel, enabling rapid containment when prevention fails.
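
    For example, a classic ITDR heuristic is flagging "impossible travel": two valid sessions whose locations imply a speed no traveler could reach. A minimal sketch (coordinates and the speed threshold are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    # Flag two sessions whose implied speed exceeds a commercial flight.
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    hours = abs(t2 - t1) / 3600 or 1e-9
    return distance_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Hypothetical: a Cincinnati login, then a "login" from Eastern Europe 20 minutes later.
print(impossible_travel((39.1, -84.5, 0), (50.4, 30.5, 1200)))  # True -> alert
```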

    Conclusion

    MFA is no longer a complete defense against modern identity attacks. Adversary-in-the-Middle demonstrates that attackers can bypass authentication by stealing sessions instead of credentials.

    Effective identity security requires layered controls that reflect how attacks occur: phish-resistant authentication, device trust, hardened sessions, and continuous monitoring.

    Identity is now the perimeter. Defending it requires more than a second factor.

    About the Author

    Chaim Black is a Cyber Security Manager at Intrust IT. He is focused on delivering resilient security operations. He leads day-to-day security team execution while strengthening internal security posture and compliance. Chaim also serves as President of InfraGard Cincinnati, part of the FBI-private sector partnership advancing information sharing and cyber risk awareness.

  • 01/26/2026 3:16 PM | Marla Halley (Administrator)

    The shift in offensive operations over the last 18 months is unlike anything the industry has seen before. AI isn't coming for defenders; it's already here. And to make things worse, attackers are using it to outpace traditional security controls at a rate that should concern everyone.

    Here's the reality: signature-based detection was always playing catch-up. It works by recognizing things that have already been seen: file hashes, known-bad strings, IOCs pulled from last month's incident. That model assumes attackers are reusing tools and infrastructure. They're not. Not anymore.

    Polymorphism at Scale

    Polymorphic malware isn't new. What's new is how trivially easy AI makes it to generate variants. A red team operator can take a loader, feed it through an LLM-assisted obfuscation pipeline, and produce hundreds of unique builds that share zero static indicators. Different hashes, different string tables, different control flow. Same capability.

    From an offensive perspective, this changes engagement dynamics completely. Payload development and evasion used to consume significant amounts of time. Now, generating AV-bypassing variants is almost a commodity task. If authorized red teams can do it with limited resources, assume actual threat actors, with more time, more money, and no rules of engagement, are doing it better.

    The tooling exists to test payloads against defender solutions in automated loops. Spin up a sandbox, drop the payload, check detection, mutate, repeat. Iterate until clean. That's not theoretical; it's how modern offensive tooling development works.

    Why Behavioral Detection Has to Be the Focus

    If static indicators are unreliable, what's left? Behavior.

    Malware can change its code, but it still must do something. It needs to establish persistence, move laterally, touch credentials, call home. Those actions leave traces that are harder to obfuscate than a file hash.

    Competent defenders should be watching for:

    • Process lineage that doesn't make sense (Word spawning PowerShell spawning cmd.exe)
    • Authentication patterns that deviate from baseline (service accounts logging in interactively, lateral movement spikes)
    • Memory behaviors associated with injection techniques
    • Network traffic that violates expected protocol norms

    Good detection engineering focuses on these patterns, not on "did we see this exact hash before." The best blue teams aren't hunting for tools; they're hunting for tradecraft.
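
    As a toy example of the first pattern above, a lineage rule can simply flag parent/child process pairs that rarely occur legitimately (the pairs and the sample tree are illustrative, not a production rule set):

```python
# Toy behavioral rule: flag suspicious parent -> child process pairs.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),
    ("powershell.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),
}

def lineage_alerts(process_tree):
    """process_tree maps a child process name to its parent process name."""
    alerts = []
    for child, parent in process_tree.items():
        if (parent, child) in SUSPICIOUS_PAIRS:
            alerts.append(f"ALERT: {parent} spawned {child}")
    return alerts

# Hypothetical telemetry: Word spawning PowerShell spawning cmd.exe.
tree = {"powershell.exe": "winword.exe", "cmd.exe": "powershell.exe", "chrome.exe": "explorer.exe"}
print("\n".join(lineage_alerts(tree)))
```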

    IOCs Need to Get Smarter

    Most IOC feeds are noise. A hash gets burned within hours. A C2 domain is useful until the next rotation. If a detection strategy depends on someone else seeing the attack first and publishing indicators, it's always behind.

    The IOCs worth investing in are behavioral: specific API call sequences, registry key patterns associated with persistence mechanisms, authentication anomalies, protocol misuse. These tie to what the attacker is trying to accomplish, not what tool they happen to be using today. That's the important distinction.

    Anyone building custom offensive tooling knows that changing source code is easy. Changing objectives is not. Credential access is still required. Lateral movement is still required. Exfiltration is still required. Detect those actions, and the operator gets caught regardless of what the payload looks like.

    AI Works Both Ways

    Defenders have access to the same technology. Machine learning models that baseline normal environment behavior and flag deviations are genuinely useful when tuned properly and fed good telemetry. The challenge is operationalizing them without drowning in false positives.

    The environments that cause the most problems during offensive engagements are the ones with mature detection engineering programs. They're correlating endpoint telemetry with identity logs and network traffic in near real-time. They're running adversary simulations that mirror actual attacker behavior, not checkbox compliance exercises. They're hunting proactively instead of waiting for alerts.

    The Uncomfortable Truth

    Prevention won't stop every breach. That's not defeatism; it's operational reality. Attackers only need to be right once. Defenders need to be right constantly.

    The goal isn't perfection. The goal is making attacker operations expensive, noisy, and slow enough that detection happens before objectives are achieved. That means investing in detection engineering, building response capabilities that actually work under pressure, and accepting that security stacks will fail at some point.

    AI is making attacks cheaper and faster to produce. The response isn't more signatures; it's better detection of the behaviors that signatures can't catch.

    Author:

    Anthony Cihan is the Senior Principal Cybersecurity Engineer at Obviam, where he leads offensive security operations and security assessments. He holds a BS in Cybersecurity and Information Assurance and the OSCP and OSWP certifications, and has published multiple offensive security tools, such as the PiSquirrel wiretap/implant and the Spellbinder SLAAC-based IPv6 attack tool.


  • 12/23/2025 10:24 AM | Marla Halley (Administrator)

    The technology landscape is shifting at an unprecedented pace, driven primarily by the rapid maturity of Artificial Intelligence. For tech leaders—CIOs, CTOs, and CISOs—2026 isn't just another year; it's a pivotal moment to move from experimentation to enterprise-grade execution. Success will be defined not by the technology you adopt, but by how strategically and responsibly you embed it at the core of your business.

    Here are my top five priorities that will define the winners in 2026 and beyond.

    1. Establish Comprehensive AI Governance and Ethics

    AI is no longer a fringe tool; it's becoming the operational fabric of the enterprise. This widespread adoption, especially of Generative AI and autonomous agents, elevates the need for robust governance.

    Leaders must prioritize building a comprehensive AI governance framework that moves from policy to operation. This framework is essential for managing risk, ensuring compliance, and building customer trust. Key actions include:

    • Define Responsible Use: Implement clear, regularly updated policies for how employees can and cannot use AI tools, with a focus on data privacy and intellectual property.
    • Ensure Data Provenance: As AI models rely on vast datasets, establishing digital provenance (proving that your data and AI outputs are genuine, traceable, and compliant) is critical.
    • Build in Transparency: Design AI agents that can document and explain their decisions, allowing for essential "human-in-the-loop" review and accountability, especially in high-risk applications like hiring or customer service.

    2. Modernize Infrastructure for an AI-Native Future

    The existing IT infrastructure, often burdened by years of technical debt, cannot support the demands of AI at scale. AI models require massive compute power, high-speed data pipelines, and a flexible, low-latency environment.

    A core priority for 2026 must be the modernization of systems and the transition to an AI-native platform. This means:

    • Cloud Foundation: Doubling down on a full-stack, cloud-first approach that provides the necessary scalability, agility, and specialized AI supercomputing platforms.
    • Data Readiness: Creating a robust "data factory" with strong data governance to ensure the quality, security, and interoperability of the data that feeds your AI models.
    • Edge Computing: Leveraging edge computing capabilities, often via IoT, to process AI-driven data closer to where it's generated (e.g., manufacturing floors, smart cities) for real-time decision-making.

    3. Elevate Cybersecurity to Preemptive Resilience

    With AI-powered attacks becoming faster and more sophisticated, standard perimeter defense is insufficient. Cybersecurity is no longer an IT operational task; it's a board-level risk concern.

    Tech leaders must shift their focus to preemptive cybersecurity and a culture of resilience:

    • Zero-Trust Security: Fully implementing a zero-trust model across the organization, which assumes no user or device is trusted by default, minimizing the risk of internal breaches.
    • AI-Driven Defense: Utilizing AI security platforms for proactive threat detection, anomaly scoring, and automated incident response to combat AI-enhanced reconnaissance and supply-chain attacks.
    • Upskill Every Employee: Cybersecurity remains a human problem. Prioritize company-wide, continuous training that focuses on phishing, identity management, and the risks associated with deepfakes and synthetic content.

    4. Redesign the Workforce for Human-AI Collaboration

    The conversation around AI is shifting from job displacement to workforce transformation. Successful leaders will recognize that the competitive advantage lies in creating human-AI hybrid teams.

    The priority here is to cultivate the human skills that AI cannot replace and redefine roles for the new era:

    • Reskilling and Upskilling: Make continuous learning a strategic imperative, training employees in data fluency, AI implementation, and prompt engineering. The most valuable professionals will blend technical AI fluency with critical human skills like creativity, emotional intelligence, and long-term strategic thinking.
    • New Roles and Career Paths: Establish new career pathways for roles that manage, monitor, and design AI systems, such as AI Ethics Officers and Agent Orchestrators.
    • Focus on Human Judgment: Use AI to eliminate mundane tasks, freeing human workers to focus on high-value activities that require complex judgment, empathy, and strategic decision-making.

    5. Drive the Shift to Composable and Adaptable Architectures

    In a world defined by rapid change and intense competition, the traditional, monolithic application structure is a liability. Large, interconnected systems are slow to update, difficult to integrate with emerging AI capabilities, and prevent the business from responding quickly to market demands.

    Tech leaders must make it a strategic priority to shift toward a composable enterprise built on modular, adaptable systems. This approach emphasizes flexibility, speed, and reuse (a minimal sketch follows the list below):

    • Adopt Modular Architectures: Prioritize the full transition to microservices, containerization, and API-first design. This allows developers to quickly assemble and disassemble business capabilities (e.g., payment processing, customer login) as market conditions or new AI tools require.
    • Invest in Integration Fabric: Deploy a modern, robust integration layer (like an event mesh or sophisticated API gateway) that allows data and services to flow seamlessly between core legacy systems, cloud-native applications, and third-party vendor platforms. This is the glue that enables true agility.
    • Empower Fusion Teams: Move away from siloed IT and business units. Establish cross-functional "fusion teams" that blend business experts with low-code/no-code developers. These teams can rapidly assemble existing components to create tailored applications without waiting for lengthy, centralized IT development cycles.
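
    As a minimal sketch of the modular, API-first idea, here is one independently deployable capability exposed behind its own small API. It uses the FastAPI library; the endpoint names and fields are illustrative assumptions, not a reference architecture:

```python
from fastapi import FastAPI

# One composable capability ("payment processing") as its own small service.
app = FastAPI(title="payments")

@app.get("/health")
def health() -> dict:
    # Lets the integration layer route around an unhealthy instance.
    return {"status": "ok"}

@app.post("/charges")
def create_charge(charge: dict) -> dict:
    # A real service would validate input and publish a "charge.created"
    # event to the integration fabric for other capabilities to consume.
    return {"id": "ch_demo_001", "amount": charge.get("amount"), "status": "accepted"}

# Run (hypothetical): uvicorn payments:app --port 8001
```

    Because the capability lives behind its own API, it can be versioned, replaced, or recombined without touching the rest of the stack—which is the agility the composable model promises.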

    About the Author

    Parag Pujari is the Chief Information Officer (CIO) of Jurgensen Companies, where he oversees technology strategy and IT operations and drives digital transformation initiatives to enhance business performance and efficiency. Parag has a distinguished background in IT leadership, specializing in areas such as cloud computing, ERP, cybersecurity, and enterprise architecture. He plays a crucial role in aligning Jurgensen Companies’ technological capabilities with its long-term strategic business goals.

  • 12/23/2025 10:04 AM | Marla Halley (Administrator)

    I hire technology leaders for a living, so I do a lot of interviews. There are some attitudes that consistently raise red flags. These are behaviors to be cautious about in professional collaborations of any sort. Watch out for them when vetting vendors, negotiating partnerships, or considering a new job. And most importantly, avoid these behaviors yourself.

    Speaking poorly of former colleagues, partners, or customers

    This is perhaps the most common and damaging one I encounter. When a candidate casually disparages a previous boss as "incompetent" or a former team as "lazy," it immediately sets off alarms. First, anyone who gossips will gossip about you. If they're willing to breach confidentiality or loyalty with past relationships, what's to stop them from doing the same when they move on from your organization?

    Trust is foundational in tech leadership—sharing sensitive strategies, handling team dynamics, or collaborating on high-stakes projects all require discretion. A level of confidentiality is assumed. We need to feel safe to be imperfect. In innovative environments, mistakes happen as part of experimentation. Badmouthing absent parties erodes psychological safety; it signals that errors will be weaponized rather than learned from.

    Nobody can see the whole picture. Deference to the unknown is a sign of maturity. Perhaps the former colleague had unseen constraints—resource limitations, personal challenges, or higher-level directives. Mature professionals show humility by withholding judgment, opting instead for curiosity: "I wondered if there might have been factors I wasn't aware of."

    Blaming others for failures

    Flag number two is blaming. Externalizing failures when addressing a setback lacks nuance and appreciation of complexity. Tech ecosystems are intricate; delays often stem from interdependent factors like ambiguous requirements, shifting priorities, or uncontrollable dependencies. Leaders who oversimplify by pointing fingers miss the systemic view needed for effective problem-solving.

    Accountability is non-negotiable in leadership. Owning outcomes, even when not directly at fault, demonstrates integrity. Blamers often avoid reflection: "What could I have done differently to mitigate this?" Blame reveals a missed growth opportunity. Those who blame others stagnate, while reflective leaders evolve.

    Casting a lost promotion or reduced scope as a betrayal

    This last one is a little more obscure, but I still hear it regularly. It emerges when discussing reasons for leaving a role. Candidates will frame being pushed to a smaller team, budget cuts, or shifted responsibilities as personal victimization—"They promised me X and then pulled the rug out."

    This kind of attitude reveals entitlement over adaptability. In dynamic tech landscapes, scopes evolve due to market shifts, funding rounds, or pivots. Resilient leaders view these as realities to navigate, not betrayals to resent. Even if it does hurt to be trusted with less responsibility, taking it as an attack reflects poor emotional regulation. Reacting with bitterness suggests difficulty handling ambiguity or disappointment gracefully—qualities essential for leading through uncertainty.

    A victim mindset fosters resentment, reducing willingness to invest in the team's success when conditions aren't ideal.

    These red flags aren't about perfection—no one has a flawless history. They're about patterns of immaturity: low self-control, ego-driven responses, and combativeness over curiosity. In contrast, professionals who earn trust speak with goodwill, own their part, appreciate complexity, and adapt without grievance.

    As you build your network—whether hiring, partnering, or job-seeking—pay attention to these signals. They imply how someone handles conflict, uncertainty, and relationships. And that awareness cuts both ways: self-reflect to avoid exhibiting them yourself. Practice pausing before critiquing absent parties; frame past experiences with ownership and nuance; view changes as opportunities rather than injustices.

    In leadership, especially technology, trust compounds success. Spot these flags early, in yourself and others, and steer toward collaborations that build it rather than erode it.

    About the Author:

    Aaron Davis is a seasoned leader and talent acquisition expert. With a career spanning over two decades, Aaron has built and led successful teams across various industries, including tech staffing, software development, healthcare, and real estate investment. He founded Reliant Search Group in 2019 and still enjoys connecting business leaders with critical talent. Aaron hosts the "Being Built" podcast, where he shares insights on business growth and leadership.

  • 11/28/2025 8:32 AM | Marla Halley (Administrator)

    Cybersecurity in 2026 is at the center of digital transformation. AI-driven threats, expanding attack surfaces, and global regulatory shifts are rewriting the rules of risk management. Leaders who understand these dynamics will shape organizations that thrive in a world where security is inseparable from innovation. These five trends highlight the changes shaping cybersecurity and why acting today sets the stage for long-term growth.

    1. AI: The Double-Edged Sword

    Artificial intelligence has become a pivotal force in both offensive and defensive cybersecurity operations. Threat actors are increasingly leveraging generative AI to craft highly convincing phishing campaigns and other social engineering attacks at scale. According to SentinelOne’s 2025 report, phishing attacks surged by 1,265% year-over-year, largely driven by the adoption of GenAI in attack workflows. In response, defensive AI systems are employing behavioral analytics and predictive modeling to detect anomalies and mitigate threats in real time, aiming to counter the growing sophistication and volume of AI-enabled attacks.

    The implications extend far beyond phishing. Gartner predicts that by 2027, AI agents will reduce the time it takes to exploit account exposures by 50%, dramatically increasing the speed and scale of credential theft and account takeover attacks. This trend highlights a critical shift toward automation in cybercrime, forcing organizations to rethink response strategies and invest in adaptive security models that can keep pace with evolving threats. Organizations that fail to anticipate this shift risk facing attacks that surpass traditional defenses, leaving critical systems exposed in a matter of minutes.

    2. The Rise of Zero Trust Architecture

    Zero Trust Architecture (ZTA) has transitioned from conceptual to operational, now embedded across critical sectors like finance, healthcare, and government. It mandates verification of every access request, independent of origin or device. Microsegmentation and continuous authentication are considered foundational practices. Gartner predicts that by 2026, 10% of large enterprises will have a mature and measurable Zero Trust program in place. This trend highlights the growing focus on building resilient security frameworks to counter evolving cyber threats.

    3. Rising Risks in Operational Technology

    The rapid expansion of connected Operational Technology (OT) devices is introducing new vulnerabilities across enterprise and industrial environments. These systems, which control critical processes, are increasingly interconnected, making them attractive targets for cyberattacks. To reduce risk and maintain operational continuity, security teams are prioritizing measures such as firmware integrity checks and network segmentation.

    Large-scale environments like smart cities and industrial systems face heightened exposure because of the sheer number and diversity of connected devices. According to IBM’s Cost of a Data Breach Report, the impact is significant: in 2025, 15% of organizations experienced OT-related breaches, and nearly a quarter of those incidents caused direct damage to OT systems or equipment, with an average cost of $4.56 million per breach.

    This expanding attack surface demands a shift toward asset-centric security models and real-time monitoring to prevent lateral movement and supply chain compromise.

    4. Endpoint Detection and Response: The Frontline of Cyber Defense

    In many cases, endpoints serve as the most accessible target for attackers. In a world of hybrid work and distributed networks, attackers often target laptops, mobile devices, and other endpoints as their primary entry point. Traditional antivirus tools, designed to detect known signatures, cannot keep up with advanced threats such as fileless malware, credential theft, and AI-driven exploits.

    EDR takes a proactive approach by continuously collecting and analyzing data from every endpoint on the network, including processes, performance metrics, network connections, and user behaviors. By storing this data in a centralized cloud-based system, EDR enables security teams to identify anomalies quickly and respond before attackers can move deeper into the network. When a threat is detected, EDR can immediately isolate the compromised device, preventing further spread and minimizing impact. IBM research shows that 90 percent of cyberattacks and 70 percent of breaches originate at endpoint devices, making robust monitoring and response capabilities a top priority. Organizations that rely solely on traditional antivirus remain vulnerable to modern attack techniques. To maintain resilience and respond quickly to threats, EDR should be a core component of every security strategy.

    5. Preparing for the Quantum Era

    Post-Quantum Cryptography (PQC) introduces cryptographic algorithms designed to withstand the computational power of quantum computers, which threaten to break traditional encryption methods like RSA and ECC. Instead of relying on current mathematical problems vulnerable to quantum attacks, PQC uses lattice-based, hash-based, and multivariate polynomial schemes that remain secure even in a quantum-driven world.

    The urgency for PQC adoption is growing as organizations recognize the long-term risk of “harvest now, decrypt later” attacks. Sensitive data encrypted today could be compromised in the future when quantum computing becomes mainstream. Gartner predicts that by 2029, advances in quantum computing will render applications, data, and networks protected by asymmetric cryptography unsafe, and by 2034, these methods will be fully breakable. Similarly, a Forbes Technology Council report highlights that quantum computing is now considered a top emerging cybersecurity threat, prompting U.S. policymakers to push for immediate preparation across both government and industry.

    PQC allows organizations to strengthen their encryption for the future while maintaining efficiency and compatibility with existing systems. By integrating quantum-safe algorithms into existing systems, businesses can maintain compliance, secure cloud environments, and protect IoT ecosystems against next-generation threats. This shift transforms cryptography from a static safeguard into a resilient, adaptive defense for the quantum era.
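
    As a hedged illustration of what adoption can look like, the open-source liboqs project ships Python bindings that wrap a quantum-safe key encapsulation in a few lines (assuming the oqs package is installed; the available algorithm names depend on the liboqs version):

```python
import oqs  # Python bindings for liboqs (open-source PQC library)

# ML-KEM (FIPS 203, formerly Kyber) is a NIST-standardized lattice-based KEM.
KEM_ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()
    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        # Sender derives a shared secret and a ciphertext from the public key.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same shared secret with its private key.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a symmetric key
```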

    Conclusion

    Cybersecurity in 2026 is about staying ahead of threats before they emerge. AI-powered defenses, Zero Trust principles, and quantum-resistant cryptography are becoming standard practices for organizations that want to remain resilient. The companies that treat security as a core business strategy will be best positioned to protect assets, uphold compliance, and foster sustainable growth.

    Strengthen Your Cybersecurity Strategy

    At The Greentree Group, we help organizations protect critical data with comprehensive cybersecurity solutions. We work with federal, state, local, and commercial clients to identify threats, prevent vulnerabilities, and strengthen system security. Contact us today to take a proactive step toward securing your business.

    About The Author:

    Mackenzie Cole is an Analyst at The Greentree Group and a proud Wright State University alum, specializing in marketing strategy and analytics. With a passion for turning insights into impactful campaigns, Mackenzie has worked on a variety of multi-channel marketing initiatives with a focus on technology, creative storytelling, and connecting with local communities through purpose-driven marketing.

