ADDS Forest and Domain Functional Levels
Customers regularly ask: "Why should I raise the ADDS functional level?" Here are the answers for each functional level. Whatever applications run in your AD, raise the level to the maximum your domain controllers support.

Windows Server 2016
Supported Domain Controller Operating System: Windows Server 2016
- Windows Server 2016 forest functional level features
- Windows Server 2016 domain functional level features

Windows Server 2012 R2
Supported Domain Controller Operating System: Windows Server 2012 R2
- Windows Server 2012 R2 forest functional level features
- Windows Server 2012 R2 domain functional level features

Windows Server 2012
Supported Domain Controller Operating System: Windows Server 2012
- Windows Server 2012 forest functional level features
- Windows Server 2012 domain functional level features

Windows Server 2008 R2
Supported Domain Controller Operating System: Windows Server 2008 R2
- Windows Server 2008 R2 forest functional level features
- Windows Server 2008 R2 domain functional level features

Windows Server 2008
Supported Domain Controller Operating System: Windows Server 2008
- Windows Server 2008 forest functional level features
- Windows Server 2008 domain functional level features

Windows Server 2003
Supported Domain Controller Operating System: Windows Server 2003
- Windows Server 2003 forest functional level features
- Windows Server 2003 domain functional level features

Windows 2000
Supported Domain Controller Operating System: Windows 2000
- Windows 2000 native forest functional level features
- Windows 2000 native domain functional level features
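As a quick way to inspect and raise the levels, you can use the Active Directory PowerShell module. A minimal sketch, assuming a domain-joined admin workstation with RSAT; the domain name dom2016.local is just an example:

```powershell
# Requires the ActiveDirectory RSAT module
Import-Module ActiveDirectory

# Inspect the current forest and domain functional levels
(Get-ADForest).ForestMode
(Get-ADDomain).DomainMode

# Raise to Windows Server 2016 (effectively irreversible; back up first)
Set-ADDomainMode -Identity 'dom2016.local' -DomainMode Windows2016Domain
Set-ADForestMode -Identity 'dom2016.local' -ForestMode Windows2016Forest
```

The forest level can only be raised once every domain in the forest is already at that domain functional level.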
WUAUCLT no longer working on Windows 10? Replace it with USOCLIENT.EXE
The Windows Update command-line tool WUAUCLT.EXE no longer works on Windows 10 and Server 2016; replace it with USOCLIENT.EXE. You may have noticed that the command line WUAUCLT /DetectNow /ReportNow /ResetAuthorization no longer works on Windows 10. All of its commands have been replaced by USOCLIENT.EXE with the arguments RefreshSettings, StartScan, StartDownload and StartInstall. You won't find any documentation on that tool, but you can learn more about its usage from the built-in tasks in Admin Tools / Task Scheduler / Microsoft / Windows / UpdateOrchestrator. The following PowerShell lines will give you the details:

PS C:\> Get-ScheduledTask -TaskPath '\Microsoft\Windows\UpdateOrchestrator\' `
>> | Select-Object @{Expression={$_.TaskName};Label="TaskName"}, `
>> @{Expression={$_.Actions.Execute + ' ' + $_.Actions.Arguments};Label="CommandLine"}

TaskName                    CommandLine
--------                    -----------
AC Power Download           %systemroot%\system32\usoclient.exe StartDownload
Maintenance Install         %systemroot%\system32\usoclient.exe StartInstall
MusUx_LogonUpdateResults    %systemroot%\system32\MusNotification.exe LogonUpdateResults
MusUx_UpdateInterval        %systemroot%\system32\MusNotification.exe Display
Policy Install              %systemroot%\system32\usoclient.exe StartInstall
Reboot                      %systemroot%\system32\MusNotification.exe ReadyToReboot
Refresh Settings            %systemroot%\system32\usoclient.exe RefreshSettings
Resume On Boot              %systemroot%\system32\usoclient.exe ResumeUpdate
Schedule Scan               %systemroot%\system32\usoclient.exe StartScan
USO_UxBroker_Display        %systemroot%\system32\MusNotification.exe Display
USO_UxBroker_ReadyToReboot  %systemroot%\system32\MusNotification.exe ReadyToReboot
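If you prefer not to call the undocumented tool directly, you can trigger the same actions through those built-in scheduled tasks. A short sketch (run from an elevated PowerShell prompt):

```powershell
# Trigger a Windows Update scan via the UpdateOrchestrator scheduled task
Start-ScheduledTask -TaskPath '\Microsoft\Windows\UpdateOrchestrator\' -TaskName 'Schedule Scan'

# Or invoke the undocumented tool directly with one of its arguments
& "$env:systemroot\system32\usoclient.exe" StartScan
```

Note that usoclient.exe produces no console output; check the Windows Update history or logs to confirm the scan ran.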
Performance tuning Windows Server 2016 containers
Introduction
Windows Server 2016 is the first version of Windows to ship with container technology built into the OS. In Server 2016, two types of containers are available: Windows Server Containers and Hyper-V Containers. Each container type supports either the Server Core or Nano Server SKU of Windows Server 2016. These configurations have different performance implications, which we detail below to help you understand which is right for your scenarios. In addition, we detail performance-impacting configurations and describe the trade-offs of each of those options.

Windows Server Containers and Hyper-V Containers
Windows Server Containers and Hyper-V Containers offer many of the same portability and consistency benefits but differ in their isolation guarantees and performance characteristics. Windows Server Containers provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. Hyper-V Containers expand on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine; in this configuration the kernel of the container host is not shared with the Hyper-V Containers. The additional isolation provided by Hyper-V Containers is achieved in large part by a hypervisor layer of isolation between the container and the container host. This affects container density: unlike with Windows Server Containers, less sharing of system files and binaries can occur, resulting in an overall larger storage and memory footprint. In addition, there is the expected additional overhead in some network, storage I/O, and CPU paths.

Nano Server and Server Core
Windows Server Containers and Hyper-V Containers offer support for Server Core and for a new installation option available in Windows Server 2016: Nano Server.
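With Docker on Windows Server 2016, the container type is chosen per container at run time via the --isolation flag, so the same image can run under either isolation mode. A sketch (microsoft/nanoserver is the public base image name at the time of writing):

```powershell
# Windows Server Container: shares the host kernel (the default on Server 2016)
docker run --isolation=process microsoft/nanoserver cmd /c echo hello

# Hyper-V Container: same image, isolated in a lightweight utility VM
docker run --isolation=hyperv microsoft/nanoserver cmd /c echo hello
```

This makes it easy to measure the density and start-up trade-offs described above on your own hardware with an identical workload.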
Nano Server is a remotely administered server operating system optimized for private clouds and datacenters. It is similar to Windows Server in Server Core mode, but significantly smaller, has no local logon capability, and only supports 64-bit applications, tools, and agents. It takes up far less disk space and starts faster.

Container Start-Up Time
Container start-up time is a key metric in many of the scenarios where containers offer the greatest benefit. As such, understanding how best to optimize for container start-up time is critical. Below are some tuning trade-offs to understand in order to achieve improved start-up time.

First Logon
Microsoft ships a base image for both Nano Server and Server Core. The Server Core base image has been optimized by removing the start-up time overhead associated with first logon (OOBE). This is not the case with the Nano Server base image. However, this cost can be removed from Nano Server based images by committing at least one layer to the container image; subsequent container starts from that image will not incur the first-logon cost.

Scratch Space Location
Containers, by default, use a temporary scratch space on the container host's system drive media for storage during the lifetime of the running container. This serves as the container's system drive, and as such many of the reads and writes done in container operation follow this path. For host systems where the system drive sits on spinning magnetic media (HDDs) but faster storage media is available (faster HDDs or SSDs), it is possible to move the container scratch space to a different drive. This is achieved with the dockerd -g command. This setting is global and affects all containers running on the system.

Nested Hyper-V Containers
Hyper-V for Windows Server 2016 introduces nested hypervisor support; that is, it is now possible to run a virtual machine from within a virtual machine.
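Relocating the scratch space is done at the daemon level. A minimal sketch, assuming D:\ is the faster drive ('D:\docker' is just an example path); note that -g moves Docker's entire data root, and therefore the scratch space, for all containers on the host:

```powershell
# Stop the running Docker service first
Stop-Service docker

# Restart the daemon with its data root (and container scratch space) on D:\
dockerd -g 'D:\docker'
```

In production you would make this change persistent in the Docker service configuration rather than running dockerd in the foreground.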
This opens up many useful scenarios but also exacerbates some of the performance impact that the hypervisor incurs, as there are two levels of hypervisor running above the physical host. For containers, this matters when running a Hyper-V Container inside a virtual machine. Since a Hyper-V Container offers isolation through a hypervisor layer between itself and the container host, when the container host is itself a Hyper-V virtual machine there is performance overhead in terms of container start-up time, storage I/O, network I/O and throughput, and CPU.

Storage

Mounted Data Volumes
Containers offer the ability to use the container host's system drive for the container scratch space. However, the container scratch space has a life span equal to that of the container: when the container is stopped, the scratch space and all associated data go away. There are many scenarios, however, in which having data persist independently of container lifetime is desired. In these cases, mounting data volumes from the container host into the container is supported. For Windows Server Containers there is negligible I/O path overhead associated with mounted data volumes (near-native performance). However, when mounting data volumes into Hyper-V Containers there is some I/O performance degradation in that path, and this impact is exacerbated when running Hyper-V Containers inside virtual machines.

Scratch Space
Both Windows Server Containers and Hyper-V Containers provide a 20 GB dynamic VHD for the container scratch space by default. For both container types, the container OS takes up a portion of that space, and this is true for every container started. It is therefore important to remember that every container started has some storage impact and, depending on the workload, can write up to 20 GB to the backing storage media. Server storage configurations should be designed with this in mind.
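Both behaviours can be exercised from the Docker CLI. A sketch, assuming a C:\data folder exists on the host and the default windowsfilter storage driver is in use (image names and sizes are examples):

```powershell
# Persist data beyond the container's lifetime by mounting a host folder
docker run -v C:\data:C:\data microsoft/windowsservercore cmd /c "echo hello > C:\data\out.txt"

# Override the default 20 GB scratch VHD size for a single container
docker run --storage-opt size=50G microsoft/windowsservercore cmd
```

After the first command exits, C:\data\out.txt remains on the host even though the container's scratch space is gone.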
Networking
Windows Server Containers and Hyper-V Containers offer a variety of networking modes to best suit the needs of differing networking configurations. Each of these options presents its own performance characteristics.

Windows Network Address Translation (WinNAT)
Each container receives an IP address from an internal, private IP prefix (e.g. 172.16.0.0/12). Port forwarding/mapping from the container host to container endpoints is supported. Docker creates a NAT network by default when dockerd first runs. Of these three modes, the NAT configuration is the most expensive network I/O path, but has the
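The default NAT behaviour can be seen from the Docker CLI. A sketch (the image name and port numbers are examples):

```powershell
# Inspect the 'nat' network that dockerd creates on first run
docker network inspect nat

# Publish container port 80 on host port 8080 through a WinNAT port mapping
docker run -d -p 8080:80 microsoft/iis
```

Each published port creates a WinNAT static mapping from the host to the container endpoint, which is where the extra network I/O cost of this mode comes from.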
Bulk-deploy Microsoft Windows Nano Server 2016 and join it to a domain
Hi,

As I searched the web for how to automatically deploy Nano Server and join it to Active Directory, I could only find incomplete solutions. I saw many approaches involving a manual copy of the blob generated with djoin.exe. I went deep into ":\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1" and discovered that everything is in the box. For this I am working with the latest version of Windows Server 2016: Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-3_MLF_X21-30350.ISO

You can manually create your own unattend.xml thanks to WAIK10, but there is no need for that; as I said, everything is in the module NanoServerImageGenerator.psm1. Also, as I am quite lazy and want the simplest code, this is what I wrote, which generates my disk.vhdx:

<#
	.NOTES
	===========================================================================
	 Created with:  SAPIEN Technologies, Inc., PowerShell Studio 2017 v5.4.136
	 Created on:    21/03/2017 08:11
	 Created by:    Jean-Yves Moschetto jeanyves.moschetto@yoursystems.eu
	 Organization:  CARIB INFRA – YOURSYSTEMS
	 Filename:
	===========================================================================
	- Copy to C:\EXPLOIT\NanoServer
	- BasePath doesn't support network drive or UNC path
	- To be executed from a member Windows 10/2016 computer, as 'Domain admin' for account provisioning
	- Must have a DHCP with DNS to join domain
#>
$Configurationdata = @{
    Global = @{
        BasePath   = 'C:\EXPLOIT\NanoServer'
        MediaPath  = 'D:\'
        TargetPath = '\\10.10.10.2\c$\VM\VHD'
        Domain     = 'dom2016.local'
    }
    Servers = @(
        @{ ComputerName = 'Nano1'; Edition = 'Datacenter'; DeploymentType = 'Guest' },
        @{ ComputerName = 'Nano2'; Edition = 'Standard'; DeploymentType = 'Guest' }
    )
}

Import-Module "$($Configurationdata.Global.BasePath)\NanoServerImageGenerator\NanoServerImageGenerator.psm1"

if ($Credential -eq $null)
{
    $Credential = Get-Credential -Message 'Enter administrator password' -UserName Administrator
}

ForEach ($Server in $Configurationdata.Servers)
{
    # Pre-create the machine account and save the offline domain-join blob
    djoin.exe /provision /domain $($Configurationdata.Global.Domain) /machine $($Server.ComputerName) /savefile "$($Configurationdata.Global.BasePath)\$($Server.ComputerName).blob" /reuse

    # Build the Nano Server VHDX with the blob baked in
    New-NanoServerImage -DeploymentType $Server.DeploymentType `
        -Edition $Server.Edition `
        -MediaPath $Configurationdata.Global.MediaPath `
        -BasePath $Configurationdata.Global.BasePath `
        -TargetPath "$($Configurationdata.Global.TargetPath)\$($Server.ComputerName).vhdx" `
        -AdministratorPassword $Credential.Password `
        -DomainBlobPath "$($Configurationdata.Global.BasePath)\$($Server.ComputerName).blob" `
        -EnableRemoteManagementPort:$true
}

Just create your virtual machine manually or from a script, attach the newly created vhdx disk, and boot. It takes just 3 seconds to boot, and here it is...:
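The VM itself can be created from PowerShell as well. A minimal sketch using the Hyper-V module on the target host; the VM name, switch name and memory size are assumptions to adjust to your environment:

```powershell
# Assumes the Hyper-V PowerShell module on the host at 10.10.10.2;
# 'LAN' is a hypothetical virtual switch name.
New-VM -Name 'Nano1' -MemoryStartupBytes 1GB -Generation 2 `
    -VHDPath 'C:\VM\VHD\Nano1.vhdx' -SwitchName 'LAN'
Start-VM -Name 'Nano1'
```

Since the domain-join blob is already baked into the image, the machine comes up domain-joined with no further interaction.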