There’s a lot of misinformation in this thread.
A lot of the posts here describe microsegmentation, not zero-trust. That said, microsegmentation is absolutely a good thing, if you can handle the administrative overhead, and you should look into it if your enterprise will pay for a solution that provides it. It’s probably more bang for the buck than zero-trust for you.
There are myriad posts here about microsegmentation, so I won’t spend too much time on it, but tl;dr: identify the flows that matter for the applications you care about, whitelist those flows, and blacklist everything else. Expect a lot of application breakage during this process. You can observe for a while before flipping to a blacklist model, but invariably you’ll miss flows that happen only occasionally or by exception. Make sure whatever solution you use logs denied flows historically so you can troubleshoot later.
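In pseudocode, the allow-list model boils down to something like this (zone names, the flow tuple shape, and the log are all made up for illustration, not tied to any product):

```python
from datetime import datetime, timezone

# Flows you observed and blessed during the discovery period:
# (src_zone, dst_zone, dst_port) - purely illustrative values.
ALLOWED_FLOWS = {
    ("users", "app-servers", 443),
    ("app-servers", "db-servers", 5432),
}

denied_log = []  # in a real deployment this goes to your SIEM

def evaluate(src_zone, dst_zone, dst_port):
    """Return True if the flow is allow-listed; log and deny otherwise."""
    if (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS:
        return True
    # Keep a historical record so the rare, legitimate flows you missed
    # during observation can be found and whitelisted later.
    denied_log.append(
        (datetime.now(timezone.utc), src_zone, dst_zone, dst_port)
    )
    return False
```

The point of the log isn’t just forensics - it’s how you find the once-a-quarter flows that break after you flip to default-deny.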
> We are planning to segment the network with different use cases, such as: users in VLAN A should not talk to users in VLAN B, and the IOT VLAN should not communicate with the user and server VLANs.
Consider this more granularly, maybe - the ‘micro’ in microsegmentation. IoT devices rarely need to communicate with other IoT devices, and east-west traffic among users is similarly rare (though not unheard of). Most traffic in the enterprise (and really, anywhere outside the datacenter) is north-south.
Zero-trust represents a system where the thing you’re connecting to your enterprise network is untrusted until proven otherwise, but also (crucially left out in many posts) the thing connecting doesn’t trust the network (or peer). If someone forklifts a server out of your datacenter, it shouldn’t give up the keys to the kingdom if attached to a switch in a warehouse out in Maryland.
The things you care about should strongly authenticate via something rooted in hardware, like a certificate whose private key lives in hardware. For users, combine that hardware certificate with user credentials (something you have plus something you know - inherent multifactor). This gets you part of the way there - you now know the device is at least the one you issued. What about the “stuff” running on it? Not just applications, but the OS, the OS loader, etc.
Ideally this is handled via remote attestation signed by keys derived from something like a trusted platform module (TPM). You want to ensure the things you are attaching in sensitive areas are measured and compared securely against known-good values. For **actual** zero-trust, you need to measure everything in hardware before it runs (so it can’t self-modify), and then compare against known-good values before allowing something access.
The idea here is that you are protecting against low-level attacks - if you can’t trust the OS loader, you can’t trust the OS, and if you can’t trust the OS, you can’t trust anything running on it, and so on. This is “hard” and frequently onerous with existing solutions (and you have to strongly protect your remote attestation endpoint and system itself). Something like Secure Boot is usually considered “good enough” for most enterprises as a starting point.
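To make the chain-of-trust idea concrete, here’s a sketch in the style of a TPM PCR “extend” operation (SHA-256; the component names and the idea of a precomputed golden value are illustrative, not a real attestation protocol):

```python
import hashlib

def extend(pcr, measurement):
    """One PCR extend step: new_pcr = SHA256(old_pcr || SHA256(measurement)).

    PCRs can only be extended, never set directly, so software that has
    already run can't rewrite the record of what ran before it.
    """
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot_chain(components):
    """Fold each boot component into the PCR, firmware first."""
    pcr = b"\x00" * 32  # PCRs start zeroed at reset
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

# A verifier compares the (signed, quoted) PCR value against a
# known-good value computed from the exact binaries it expects.
# Change any one component - or even just the order - and the
# final value no longer matches.
```

This is why the earlier components in the chain matter so much: a tampered OS loader changes every value measured after it, so the final PCR can’t be faked back to the golden value.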
Attestation and authentication should be done mutually - a new device should verify its peer in addition to expecting to authenticate itself.
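Mutual TLS with client certificates is the usual building block for this. A minimal sketch with Python’s `ssl` module, assuming a private CA that issues your device certs (the `ca_file` path is a placeholder, and the private key would live in hardware in a real deployment - that part is outside what `ssl` shows here):

```python
import ssl

def device_tls_context(role, ca_file=None):
    """Build a TLS context that refuses unauthenticated peers.

    role: "server" or "client". Either way the peer must present a
    certificate chaining to our CA - neither side trusts the network
    or the other end by default.
    """
    proto = ssl.PROTOCOL_TLS_SERVER if role == "server" else ssl.PROTOCOL_TLS_CLIENT
    ctx = ssl.SSLContext(proto)
    ctx.verify_mode = ssl.CERT_REQUIRED  # no valid cert, no connection
    if ca_file:
        # CA that issues your device certificates (placeholder path)
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

A server would also call `load_cert_chain()` with its own identity before accepting connections - the key point is that *both* sides set `CERT_REQUIRED`, not just the server.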
Note that network devices such as routers, switches, and firewalls are also considered untrusted in this model, and should be strongly authenticated and authorized where possible. This isn’t just the realm of “server guys.” 