When IIS starts, the Web Administration Service initializes the http.sys namespace routing table with one entry for each application. This routing table determines to which application pool an application should be routed. When http.sys receives a request, it asks WAS to start up one or more worker processes to handle that application pool. This isolation of processes makes the web server as a whole more stable.
What is the role of HTTP.sys in IIS?
HTTP.sys is the kernel-level component of IIS. Every client request first hits HTTP.sys at the kernel level. HTTP.sys then places each request in a queue for the individual application pool the request belongs to.
Whenever we create an application pool, IIS automatically registers the pool with HTTP.sys so that it can identify the particular pool during request processing.
Application pools are used to separate sets of IIS worker processes that share the same configuration and application boundaries. Application pools isolate our web applications for better security, reliability, availability, and performance, and keep them running without impacting each other. The worker process serves as the process boundary that separates application pools, so that when one worker process or application has an issue or recycles, other applications and worker processes are not affected.
One application pool can also have multiple worker processes.
IIS hosts the websites, and the websites run under application pools.
Every site can have its own application pool, or it can use the default app pool.
If a dedicated app pool has an issue, only that one website is impacted; but if the issue is with the default app pool, all the sites using it are impacted.
We can configure logging for the sites at the server level in IIS.
HTTP Redirect: this feature is used to redirect requests for a site to another page, for example during downtime.
IIS also offers several authentication options.
Another important security feature is the ability to control the identity under which code is executed. Impersonation is when ASP.NET executes code in the context of an authenticated and authorized client. By default, ASP.NET does not use impersonation and instead executes all code using the same user account as the ASP.NET process, which is typically the ASPNET account. This is contrary to the default behavior of ASP, which uses impersonation by default. In Internet Information Services (IIS) 6, the default identity is the NetworkService account.
If you enable impersonation, ASP.NET can either impersonate the authenticated identity received from IIS or one specified in the application’s Web.config file.
• Impersonation is disabled. This is the default setting. For backward compatibility with ASP, you must enable impersonation and change the ASP.NET process identity to use the Local System account. In this instance, the ASP.NET thread runs using the process token of the application worker process regardless of which combination of IIS and ASP.NET authentication is used. By default, the process identity of the application worker process is the ASPNET account. For more information, see ASP.NET Process Identity.
<identity impersonate="false" />
• Impersonation enabled. In this instance, ASP.NET impersonates the token passed to it by IIS, which is either an authenticated user or the anonymous Internet user account (IUSR_machinename).
<identity impersonate="true" />
• Impersonation enabled for a specific identity. In this instance, ASP.NET impersonates the token generated using an identity specified in the Web.config file.
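A minimal web.config sketch of the third mode; the userName and password values are placeholders, not real accounts:

```xml
<configuration>
  <system.web>
    <!-- Impersonate a fixed, named identity (placeholder credentials shown) -->
    <identity impersonate="true"
              userName="contoso\appUser"
              password="placeholder" />
  </system.web>
</configuration>
```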
Impersonation enables ASP.NET to execute code and access resources in the context of an authenticated and authorized user, but only on the server where ASP.NET is running. To access resources located on another computer on behalf of an impersonated user requires authentication delegation (or delegation for short). You can think of delegation as a more powerful form of impersonation, as it enables impersonation across a network.
IIS first checks to make sure the incoming request comes from an IP address that is allowed access to the domain. If not, it denies the request.
Next, IIS performs its own user authentication if it is configured to do so. By default IIS allows anonymous access, so requests are automatically authenticated, but you can change this default on a per-application basis within IIS.
If the request is passed to ASP.NET with an authenticated user, ASP.NET checks to see whether impersonation is enabled. If impersonation is enabled, ASP.NET acts as though it were the authenticated user. If not, ASP.NET acts with its own configured account.
Finally, the identity from step 3 is used to request resources from the operating system. If ASP.NET can obtain all the necessary resources, it grants the user's request; otherwise the request is denied. Resources can include much more than just the ASP.NET page itself: you can also use .NET's code access security features to extend this authorization step to disk files, registry keys, and other resources.
The Windows authentication provider lets you authenticate users based on their Windows accounts. This provider uses IIS to perform the authentication and then passes the authenticated identity to your code. This is the default provider for ASP.NET.
The Passport authentication provider uses Microsoft's Passport service to authenticate users.
The forms authentication provider uses custom HTML forms to collect authentication information and lets you use your own logic to authenticate users. The user’s credentials are stored in a cookie for use during the session.
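A minimal web.config sketch of forms authentication; the login page name and timeout are illustrative values:

```xml
<configuration>
  <system.web>
    <!-- Redirect unauthenticated users to a custom login page -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" timeout="30" />
    </authentication>
    <authorization>
      <!-- "?" denies all anonymous users, forcing them through the form -->
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```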
How SSL works:
When you open a website like facebook.com or gmail.com, the browser first creates a TCP connection to that site's web server.
If we send an account password over this plain connection, there is a chance of it being intercepted.
To prevent this, we use cryptography.
One approach is to encrypt the data using a key and decrypt it using the same key. This is called symmetric encryption.
On its own this is not a good option, because both sides would have to share the same secret key.
What if we use one key for encryption and another key for decryption? This is called asymmetric encryption.
We use the public key to encrypt the data and the private key to decrypt it.
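The symmetric case can be sketched with a toy XOR cipher in Python. This is purely illustrative; real SSL/TLS uses ciphers such as AES, never XOR. It only shows that one shared key both encrypts and decrypts:

```python
# Toy illustration of symmetric encryption: ONE shared key does both jobs.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"my password"
key = b"shared-key"
ciphertext = xor_cipher(secret, key)      # encrypt with the shared key
recovered = xor_cipher(ciphertext, key)   # decrypt with the SAME key
assert recovered == secret
```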
Here is how the SSL handshake works.
After the TCP connection is established, the SSL handshake process starts.
For this handshake, the client first sends a Client Hello message, which contains the client's highest supported SSL version, its ciphers/compression methods, and random data.
The server responds with the SSL version that will be used, the chosen cipher/compression, its own random data, and a session ID for the session.
After this, the server sends its digital certificate, and this certificate serves 2 purposes:
1. It carries the public key (and the chain of certificates).
2. It establishes the identity of the server the response is coming from.
Then the server sends a Server Hello Done message.
The client verifies the certificate and sends its key exchange message.
The client then sends a Change Cipher Spec message, meaning that from now on, the data sent over this session will be encrypted.
The browser sends a Finished message covering all the messages exchanged so far, to check that none of the messages have been tampered with.
The server sends its own Change Cipher Spec message.
The server then sends its Finished message, also covering all the messages exchanged so far, again to check that nothing was tampered with.
At this point, the SSL handshake is said to be complete. During the handshake the browser generates a secret session key; it encrypts that key with the server's public key, so only the server can decrypt it, and the session then uses it to encrypt and decrypt data.
If any of these validations fail, the SSL connection is terminated and the browser shows an error.
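Python's standard ssl module can show what a modern client would offer in its Client Hello (protocol range and cipher suites). This only inspects the local client configuration; no connection is made:

```python
import ssl

# Build the same kind of client-side context a browser-like client would use.
ctx = ssl.create_default_context()

print("minimum protocol:", ctx.minimum_version)  # oldest TLS version accepted
ciphers = ctx.get_ciphers()                      # suites offered to the server
print("cipher suites offered:", len(ciphers))
print("first suite:", ciphers[0]["name"])
```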
Server Certificates: shows the certificates installed at the server level.
Edit Permissions for a site is used to set permissions on the site's folder, share it, and so on.
SSL Settings: here we have the settings for client certificates.
To see the certificate that is bound to a site: go to the site, right-click, choose Edit Bindings, and edit the binding so that you can see the certificate.
If we want to add or remove role services, we can use Server Manager: Roles -> Web Server, and make the changes there.
Best Practices for IIS Architecture:
Web farm -> load balancing.
There are multiple types of clusters:
1. Windows cluster: node A and node B share a storage area network (SAN).
Only the active node works at a time; the other one is passive.
A copy of the SAN data is placed on both machines, so if one server goes down, the other one is pointed at it.
2. NLB cluster: we have nodes that have the SAN, and requests reach these nodes through the NLB manager.
We might have an environment like this:
nodes a, b, c, d with IP address 1 and NLB, and
nodes e, f, g, h with IP address 2 and NLB; then in DNS, these 2 IP addresses are registered for http://www.nuggetlab.com.
So some users will go to the Las Vegas network and some users will go to the New York network.
This is how large sites are built.
Firewalls block the traffic that comes in through unwanted ports.
IIS sits behind the firewall, so attacks on IIS are reduced.
Another way to reduce attacks on IIS is not to install roles you don't need.
Go to IIS through Server Manager and check the Best Practices Analyzer.
Scan the role periodically to check whether you missed anything.
If there is malware on a site, we can find it through the IIS SEO Toolkit.
Network Load Balancing (NLB):
It is installed on many web servers that have the same IIS configuration and the same IIS content.
1. Overview of the architecture
2. Content replication and configuration replication
Every web server has a network adapter assigned an IP address, and every computer has a unique MAC address and IP address. The network adapter uses the MAC address for communication.
When NLB is installed, it creates a virtual MAC address that is attached to the network adapter: in addition to the NIC's real MAC address there is a second, virtual (fake) MAC address.
If you install NLB on different servers and join them to the same NLB cluster, all of those servers share the same virtual MAC address.
When a request comes to the virtual IP address, all the computers holding the virtual MAC address see the request, and NLB decides which server needs to process it; the other servers discard the request.
When any server stops responding, the other servers in the NLB cluster take care of its requests.
Installation: go to Server Manager -> Features -> Add Feature, check Network Load Balancing, and install it.
After installation, run nlbmgr from Run and create a new cluster, entering localhost so it takes the local configuration; click Next, create a cluster IP address, give the IP address and subnet mask, and click Next. The network address it shows is the (virtual) MAC address.
By default, NLB operates on all ports.
We can add or edit port rules to allow only the ports we are interested in, such as port 80 alone or the range 80 to 443.
The default affinity is Single.
We have many web servers serving requests. When a user sends a request, a session is created and the session ID is sent to the browser along with the content.
If the user's next request (with the same session ID) is routed by NLB to a different server where the session is not stored, the user is presented with the login page again.
Because of this issue, affinity has 3 modes:
1. None: every request may be redirected to a different server.
This is the best-performing mode, but it is not compatible with in-memory session state.
However, we can still share the session by keeping session state in a database.
2. Single: this is based on the client's IP address. On the client's first request, NLB remembers the IP address; after that, every request from that IP address is sent to the same server.
Session state can be stored in memory; performance is moderate.
If clients come through proxy addresses (a large network behind 2 or 3 proxy IPs), this becomes a problem, since one client's requests may arrive from different IP addresses.
For an intranet where proxies are not used, it works fine.
3. Network: the worst-performing mode.
Whenever clients come from the same network, they are all routed to one server.
It is used for internet connections where clients sit behind proxies.
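The three affinity modes can be sketched as a routing function. This is a simplified model, not NLB's actual hashing algorithm, and the server names are made up:

```python
import hashlib

SERVERS = ["web1", "web2", "web3", "web4"]  # hypothetical NLB cluster nodes

def pick_server(client_ip: str, affinity: str, request_no: int = 0) -> str:
    """Sketch of how NLB's three affinity modes map a request to a node."""
    if affinity == "none":
        # No affinity: any node may take the next request (round-robin here).
        return SERVERS[request_no % len(SERVERS)]
    if affinity == "single":
        # Single: hash the full client IP so one client sticks to one node.
        key = client_ip
    elif affinity == "network":
        # Network: hash only the network prefix so a whole subnet sticks together.
        key = ".".join(client_ip.split(".")[:3])
    else:
        raise ValueError(f"unknown affinity mode: {affinity}")
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]
```

With "single", repeated requests from one IP always land on the same node; with "network", two IPs in the same /24 land together; with "none", consecutive requests rotate.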
If we have a web farm, the IIS configuration needs to stay in sync across all servers.
We can use XCOPY or Robocopy: when we change the configuration on one IIS server, we manually replicate the change to the other servers.
Or we can keep a shared configuration in one place that is used by multiple servers.
Another option is the MSDeploy.exe tool for content replication and syncing IIS settings, and it is free.
For commercial products, see repliweb.com, which offers a web deployment tool.
Performance Tuning and Monitoring
Low performance is usually due to user code, and we can confirm that as follows.
The HTTP.sys listener in kernel mode passes requests to the websites' app pools, and the worker process of each app pool executes the requests.
If the worker process only gets requests for static HTML pages, performance will be high; this is the best case for IIS.
IIS performance is fast when it executes Microsoft code and slow when it executes user code.
In other words, bad performance in IIS is usually not IIS's fault but the developer's.
Also, when the server running IIS also hosts SQL Server, MySQL, Active Directory, and so on, IIS needs to share memory with them.
1. WCAT: it sends numerous static web page requests to IIS.
We can test like this: watch Perfmon while running the WCAT tool to send requests to IIS, and then run the same test on the other server.
When running WCAT, don't use ASP, ASP.NET, PHP, etc.; use only static HTML requests.
In perfmon.msc, add counters such as w3wp (the app pool worker process) and check the performance.
On the other server, check the performance of the user code; if there is a huge difference, ask the dev team to fix the code.
2. IIS SCOM Pack:
Check the graph in perfmon.msc: if processor usage is high while IIS usage is low, either the processor is not enough to handle the IIS requests, or some other processes are using the CPU.
In Task Manager, check for processes that are consuming more memory than the worker processes.
For troubleshooting IIS-related issues:
1. First check whether the issue is related to IIS configuration.
Go to Server Manager, then Event Viewer -> Applications and Services Logs -> Microsoft -> Windows -> IIS-Configuration.
There we can see the IIS logs for administrative actions and the operational logs.
2. If there is no configuration issue, go to Windows Logs and check the Application and System logs.
When moving a site to another server, copy the web.config file so that we don't have to redo the settings manually.
Web service logs:
Go to the site, open Logging, and use the logs to troubleshoot.
WebLog Expert is a log analyzer tool for troubleshooting logs.
For troubleshooting issues with SSL, there are tools like SSL Diagnostics.
For a site we can turn on Failed Request Tracing, so that failed requests are written to trace files.
Failed Request Tracing rules: we can add rules like:
trace PHP files (*.php) with status codes 401-599.
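An illustrative web.config fragment for such a rule. The trace areas shown are one possible selection; the shape follows the system.webServer/tracing schema:

```xml
<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="*.php">
        <traceAreas>
          <add provider="WWW Server"
               areas="Authentication,Security,Compression,Cache"
               verbosity="Verbose" />
        </traceAreas>
        <!-- Only requests finishing with these status codes are traced -->
        <failureDefinitions statusCodes="401-599" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>
```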
Advanced web server configuration
1. Compression: Static and dynamic
2. Default documents/Directory browsing.
3. Custom errors
4. CGI and Fast CGI
5. Limits (BW and connections)
6. Http headers: i.e. http response headers.
IIS can compress files so that bandwidth usage is reduced.
At the server level, we have configuration for static compression:
only compress files larger than 2700 bytes (2.7 KB);
compressed files are stored in a configured path;
the per-app-pool disk limit is 100 MB.
IIS will keep up to 100 MB of compressed files for every app pool; once the 100 MB is used up, new files overwrite the old ones.
It works like a cache.
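These server-level defaults correspond to the <httpCompression> section in applicationHost.config (the attribute names below follow the IIS 7.5+ schema; the values shown are the defaults described above):

```xml
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                 minFileSizeForComp="2700"
                 maxDiskSpaceUsage="100">
  <!-- minFileSizeForComp is in bytes; maxDiskSpaceUsage is the
       per-app-pool compressed-file cache limit in megabytes -->
  <staticTypes>
    <add mimeType="text/*" enabled="true" />
  </staticTypes>
</httpCompression>
```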
Dynamic content compression:
It increases processor utilization and can reduce the overall performance of the server.
2. Default documents:
When the user does not specify any document name, the default document module checks the default documents configured for the website, in order, against the files in that directory; the first match is served.
Place the most-used default document at the top of the list, so the burden on IIS is reduced and performance improves.
Directory browsing: go to the site and enable the directory browsing module, and the contents of the directory will be shown as a listing.
We can see this by removing the default documents.
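A web.config sketch of the ordering described above; the file names are examples:

```xml
<system.webServer>
  <defaultDocument enabled="true">
    <files>
      <clear />
      <!-- Most-requested document first, so IIS checks it first -->
      <add value="index.html" />
      <add value="Default.aspx" />
    </files>
  </defaultDocument>
  <!-- Flip to true to show a directory listing when no default doc matches -->
  <directoryBrowsing enabled="false" />
</system.webServer>
```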
3. Custom errors: here we can set up the custom error pages shown whenever a page encounters an issue.
4. CGI and FastCGI: we can configure the CGI and FastCGI settings.
5. Limits: we can set limits for a website so it does not use too much bandwidth and degrade the performance of the other sites.
We can limit the number of connections to IIS, and we can limit bandwidth usage.
These limits are configured at the website level, not at the server level.
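In applicationHost.config these appear on the site element; the site name and values below are illustrative:

```xml
<site name="MySite" id="2">
  <!-- maxBandwidth is in bytes per second; connectionTimeout is hh:mm:ss -->
  <limits maxBandwidth="1048576"
          maxConnections="500"
          connectionTimeout="00:02:00" />
</site>
```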
6. HTTP response headers: these are sent along with the content to the client.
SSL and Digital certificates
How digital certificates work:
A digital certificate involves 2 different encryption keys:
1. Private key: kept private; only the server (IIS) uses it.
2. Public key: anybody can use the public key.
Together they are called an asymmetric key pair.
Anything encrypted using the public key can be decrypted using the private key.
Anything encrypted using the private key can be decrypted using the public key.
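The key-pair relationship can be shown with textbook RSA using tiny primes. This is hopelessly insecure and purely illustrative of the two directions described above:

```python
# Textbook RSA with tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                  # modulus (part of both keys), n = 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent  -> public key is (e, n)
d = pow(e, -1, phi)        # private exponent -> private key is (d, n)

message = 42               # must be smaller than n in this toy scheme
ciphertext = pow(message, e, n)            # encrypt with the PUBLIC key
assert pow(ciphertext, d, n) == message    # only the PRIVATE key decrypts it

signature = pow(message, d, n)             # "encrypt" (sign) with the PRIVATE key
assert pow(signature, e, n) == message     # anyone with the PUBLIC key verifies
```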
When you send a request to a web server that uses a certificate, the server processes the request, encrypts the resulting data using its private key, and sends it to the client along with the server certificate.
That certificate contains the public key, which the browser extracts and uses to decrypt the content.
CA (Certification Authority): a server can hold certificates (go to Internet Options -> Content -> Certificates; there we have Trusted Root Certification Authorities). The browser accepts certificates issued by these trusted authorities.
When we go to amazon.com and do the checkout, the site switches to HTTPS and presents its certificate; the browser checks it against its certificate store to confirm the certificate is from an intended authority.
That is, the certificate was issued to amazon.com by an authority, and the authority took all the company's information before issuing it.
1. Domain-only validation: here the CA does not verify the company; it issues the certificate simply because you own the domain.
It only provides encryption, and it costs around 20 bucks per year.
2. Normal SSL (Transport Layer Security), normally just called SSL:
It costs around 100 bucks per year, as the company needs to prove its identity every year; the CA gets the opportunity to re-verify the identity.
It costs more because the CA needs to verify everything about the certificate holder.
Both of the above certificates are issued to a single host.
If you try to install one on another web server, you will get an error: this certificate does not match the hostname.
3. SAN / wildcard certificate (Subject Alternative Name):
It is used to issue certificates for *.mycompany.com.
It is costlier than a normal SSL certificate, but cheaper than buying separate SSL certificates for multiple hosts.
4. Extended Validation (EV): it's a high-grade security certificate, with a much more involved vetting process.
If we go to amazon.com, the address bar shows a lock symbol; that's a normal SSL certificate.
If you go to icicibank, the address bar shows the lock symbol together with the ICICI Bank name, and the address bar is green.
If the certificate is expired, the address bar turns red or orange.
Certificates can be assigned to multiple websites, but they are managed at the server level.
Go to Server Certificates -> Create Certificate Request, and give the company details.
Cryptographic service provider: RSA; bit length: 1024.
Export the request to a file, take the information from the file, and submit it to the SSL admin to get the certificate.
Import the issued certificate into IIS and bind it to the site; then in SSL Settings, check the Require SSL checkbox.
Create Domain Certificate and Create Self-Signed Certificate create a certificate ourselves, without a public CA issuing it.
FTP Server Administration:
Setting up, configuring, and administering the FTP server.
We can configure the FTP settings at the server level, so that new FTP sites inherit them,
or we can do the same things at the site level.
At the server level, we have FTP settings:
1. Anonymous authentication
2. Basic authentication
Both are in a disabled state by default; enable the one you need.
FTP authorization rules:
We can add rules to allow users access to a particular resource.
Rule 1: allow all users read access.
FTP directory browsing:
You can choose whether the directory listing is shown in MS-DOS or UNIX format.
All these settings are done at the server level.
Create a new FTP site, give it a path, and set the bindings: All Unassigned, port 21, no SSL.
After creating the site, it inherits the server-level configuration settings.
We can create a folder inside the FTP path directly on disk, or add it as a virtual directory; both behave the same, and the only visible difference is the symbol shown on the directory.
FTP messages: we can add messages that users will see when they view the FTP site.
FTP request filtering:
You can restrict which file extensions are allowed in.
FTP logging: same as normal IIS site logging.
FTP IPv4 address and domain restrictions: here we can add the IP addresses that will be allowed or denied.
FTP user isolation: isolates users from each other.
Don't isolate users; start users in:
1. FTP root directory: when users log in, they land in the root directory.
2. User name directory: when users log in, they land in their own directory, but they can still see the other directories.
Isolate users; restrict users to the following directory:
1. User name directory (disable global virtual directories)
2. User name physical directory (enable global virtual directories)
If you go with isolating users, choose the first option.
Extending IIS to provide more functionality:
CGI and other extensions.
Difference between extensions and filters:
The client sends a request to the server; the HTTP listener identifies the website, sends the request to the app pool, and places it in the queue of the worker process.
The worker process then works on the request.
Before the worker process executes the request, filters are executed.
Filters are written against the Internet Server Application Programming Interface (ISAPI).
Filters take the request and may modify it, for example changing the URL.
No actual processing of the page is done here: filters pre-process the request, and they can also preview the result after the request is processed.
For a static page, IIS needs no help: it takes the page from disk, puts it in memory, and sends it to the client over the network.
For dynamic pages, where PHP or ASP.NET code needs to be executed, extensions come into the picture.
When a PHP file is accessed, IIS loads the PHP extension; the extension executes the script and gives the result back to IIS, which returns it to the client.
Multiple extensions can be loaded, since the requests are of different types.
Filters are ISAPI only, but extensions come in several types:
1. ISAPI
2. CGI
3. FastCGI
All of these configurations are done at the server level.
Go to IIS at the server level; Modules and the related features are all used to extend IIS functionality.
Handler mappings: here we configure which handler serves which extension, i.e. the handlers that handle the requests coming from clients.
The relevant features include ASP, CGI, FastCGI, Modules, ISAPI Filters, ISAPI and CGI Restrictions, and Handler Mappings.
Incoming requests from the web arrive at the network interface and are listened for by the HTTP listener in kernel mode; it consults the site bindings table and sends each request to the app pool associated with the site. The app pool's worker process executes the request and sends the result back to the client.
App pools have at least 1 worker process and may have more than 1.
One app pool can serve multiple websites, or one app pool can work with just one site.
In Task Manager, on the Processes tab, the worker processes appear as w3wp.exe.
Pipeline mode: Integrated mode offers better performance and better stability and is the modern one.
However, it does not execute code written for IIS 5 and below;
for that, Classic mode is used.
The pipeline mode is set under the app pool's basic settings.
Developers do not always write good code, so applications can leak memory and make IIS consume more and more of it.
Microsoft therefore added worker process recycling, which reduces the effect of memory leaks and keeps performance up.
We have these recycling settings:
1. Regular time interval (in minutes): 1740 (29 hours).
Every 29 hours the worker process kills itself, a new worker process is created immediately, and it serves the requests for the app pool.
2. Fixed number of requests: e.g. 300; after every 300 requests the worker process is killed and a new one starts.
3. Specific time(s): at the specified time the worker process is killed and a new one is created.
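These settings live under the app pool's entry in applicationHost.config; the pool name and schedule time below are illustrative:

```xml
<applicationPools>
  <add name="MyAppPool">
    <recycling>
      <!-- time is a TimeSpan (29 hours); requests is the fixed request count -->
      <periodicRestart time="29:00:00" requests="300">
        <schedule>
          <!-- recycle at a specific time of day -->
          <add value="03:00:00" />
        </schedule>
      </periodicRestart>
    </recycling>
  </add>
</applicationPools>
```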
Memory-based maximums:
1. Virtual memory usage: if we set a memory limit, say 10 MB, and the worker process occupies more than that, the worker process is killed.
We can keep track of these recycle events in the event log.
Advanced settings of the app pool:
Queue length: 1000.
The worker process executes requests one by one; requests are placed in the queue by the HTTP listener in kernel mode.
Once more than 1000 requests are queued, the HTTP listener returns a Service Unavailable error.
Processor affinity mask: false.
It should stay false, so the CPU scheduler decides the affinity.
It is a hexadecimal mask that forces the app pool's worker process to run on specific CPUs.
Identity: the account the worker process uses to access resources while serving requests.
Idle time-out: if the worker process sits idle for 20 minutes, it kills itself.
Maximum worker processes:
The maximum number of worker processes used to handle the requests coming to the app pool.
If it is greater than 1, the configuration is called a web garden.
Process orphaning: it is used for debugging, normally by developers.
Instead of killing a failed worker process, IIS leaves it running so developers can inspect it; note that this increases memory usage.
Enable it only when it is needed.
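The web-garden and orphaning settings above map to the app pool's entry in applicationHost.config; the pool name is illustrative:

```xml
<applicationPools>
  <add name="MyAppPool">
    <!-- maxProcesses > 1 turns the pool into a "web garden" -->
    <processModel identityType="NetworkService"
                  idleTimeout="00:20:00"
                  maxProcesses="1" />
    <!-- true keeps a failed worker process alive for debugging -->
    <failure orphanWorkerProcess="false" />
  </add>
</applicationPools>
```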
Disable recycling for configuration changes: false.
By default, whenever any change is made to the configuration, IIS recycles the worker process so the new configuration is read.
Disable overlapped recycle: false.
By default, before killing the old worker process, IIS creates the new worker process, lets it load its data, and only after the data is loaded does it kill the old one.