VMs in general are more memory-bound than CPU-bound (with exceptions like SQL servers, encoders, etc.). Hypervisors are generally pretty good about spreading VMs across a pool of CPUs and grabbing whichever is idle at the time. You can manually set affinities so a VM always runs on specific cores, but it's generally wasteful to do so.
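For a feel of what pinning looks like at the OS level (hypervisors expose an analogous per-VM setting), here's a minimal sketch using Linux's `os.sched_setaffinity`. This pins a process, not a VM, so treat it as an analogy only; the choice of core is arbitrary:

```python
import os

# Which host CPUs may this process currently run on? (Linux-only API)
allowed = os.sched_getaffinity(0)

# Pin to a single CPU from the allowed set -- the OS-level analogue of
# setting a hard CPU affinity on a VM. Anything not in this set is now
# off-limits to the scheduler, even if those cores are sitting idle,
# which is exactly why blanket pinning tends to be wasteful.
one_cpu = {min(allowed)}
os.sched_setaffinity(0, one_cpu)
print(os.sched_getaffinity(0))
```

The same trade-off applies either way: you gain placement predictability but give the scheduler fewer idle cores to grab.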
One caveat (at least with how vSphere 5.x behaved) is that the hypervisor has to claim all of a VM's vCPUs at the same time before the VM can do any work, even if most of those vCPUs would be idle. For example, if I have a 4-vCPU VM on a 6-core host, it has to wait for 4 of the 6 physical cores to be free before the VM gets to do anything. (Newer versions relax this co-scheduling, but wide VMs still pay a co-stop penalty.) So a VM with fewer vCPUs can sometimes outperform one with more for the same workload. Getting proper measurements of your loads (peak/average CPU, memory, disk IOPS, etc.) is critical to a good migration.
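The co-scheduling effect can be sketched with a toy model. This is not vSphere's actual scheduler, and the contention level is made up for illustration: the VM gets a time slice only when enough physical cores are simultaneously free, so on a busy host a 2-vCPU VM can end up doing more total work than a 4-vCPU one.

```python
import random

def simulate(vcpus, pcpus=6, ticks=10_000, seed=1):
    """Toy strict co-scheduling model: the VM runs in a tick only
    when `vcpus` physical cores are free at the same moment.
    Returns (ticks_run, effective_work = ticks_run * vcpus)."""
    rng = random.Random(seed)
    ticks_run = 0
    for _ in range(ticks):
        # Other guests keep 2-6 of the host's cores busy each tick
        # (hypothetical contention level, chosen for illustration).
        free = pcpus - rng.randint(2, pcpus)
        if free >= vcpus:
            ticks_run += 1
    return ticks_run, ticks_run * vcpus

wide = simulate(vcpus=4)    # must wait for 4 of 6 cores at once
narrow = simulate(vcpus=2)  # only needs 2 free cores
print(wide, narrow)
```

Under this contention model the 2-vCPU VM is runnable roughly three times as often and accumulates more core-ticks of work overall, which is why right-sizing vCPU counts from measured peak/average load matters more than handing out big VMs by default.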