Rethink Remote Access Policy: Mark David’s Advice

Posted: October 21, 2009 in Rethink Remote Access

Moving along with our series on how to rethink remote access, IT expert Mark David shares some thoughts on remote access policy. Mark is the Chief Security Officer and Systems Manager at Carta Worldwide, which deploys MasterCard-branded prepaid chip cards.

The answer, as is often the case with IT solutions, is a multi-layered policy approach. For example, instead of issuing just a VPN account and a password: the user or the user's manager must explicitly request access, specifying the resources the user needs, and a mandatory maximum age of access should be set, past which the account must be renewed or terminated. A certificate should be issued, either by a service that specializes in this form of distribution or by an in-house site that enforces the maximum age. ACLs and/or RADIUS-type policy rules should specify each user's authentication AND authorization, alongside a host of other mechanisms and measures to stop and mitigate threats.
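The explicit-request and mandatory-maximum-age layers above can be sketched as a small data model. This is an illustrative sketch only; the names and the 90-day window are hypothetical, not part of any specific product.

```python
from datetime import date, timedelta

# Hypothetical maximum age of access, after which the account
# must be renewed or terminated.
MAX_ACCESS_AGE = timedelta(days=90)

class AccessRequest:
    """One explicitly requested, manager-approved remote access grant."""

    def __init__(self, user, approved_by, resources, granted_on):
        self.user = user                # who needs the connection
        self.approved_by = approved_by  # manager who authorized it
        self.resources = resources      # explicit list, never "everything"
        self.granted_on = granted_on

    def is_expired(self, today):
        """True once the mandatory maximum age has passed."""
        return today - self.granted_on > MAX_ACCESS_AGE

req = AccessRequest("jdoe", "manager1", ["crm", "mail"], date(2009, 6, 1))
print(req.is_expired(date(2009, 10, 21)))  # True: granted >90 days ago
```

The point is that expiry is a property of the grant itself, so a forgotten account cannot quietly remain valid forever.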

Many administrators and contractors look to certificates to achieve a level of protection for the endpoints by forcing end users to have a trusted certificate installed before their machines are let inside the network. However, the deployment policy can easily be self-defeating: combining auto-enrollment (Active Directory distributing certificates automatically to clients) with single sign-on services (most commonly associated with MS Active Directory) means an attacker who simply gains access to the user's laptop while it's logged in to the user's account can connect remotely with no barrier whatsoever.

When designing an IT solution, it's important to keep in mind that to the end users, it's a business solution before it's a technology solution. This gives rise to the phenomenon of security becoming stronger toward the middle of a connection, furthest from the end users, and weaker toward the endpoints. Yet the middle of the connection is the least exposed to compromise, and the endpoints are the most exposed.

Sure, in an ideal scenario, the goal would be to attain and then maintain trust in a given connection from the very first moment it is provisioned all the way until it is removed from the system. The foundation should therefore always be the access policy and the measures it enforces to ensure only a trusted human being is given a connection into the network. Trust in the individual requesting the connection can be established by methods such as performing an interview and/or requiring managerial authorization. Once trusted, the connection should fit into an authentication scheme that requires dual control, i.e., a certificate and a password, to make it more difficult for thieves to gain access after stealing a user's laptop or gaining some other form of illicit access to a terminal. And yes, NAC can fit in here as well, to make sure the hardware and OS maintain a minimal degree of trust and are free of malicious code.

An authorization infrastructure (RADIUS, TACACS+, Active Directory, or other) should also be in place, enforcing a policy that allows the user access to only the required resources. At the application level, the policy should enforce specific user rights to the maximum degree of granularity the application supports; a business user, for example, may need to download sales reports but have no need to access customers' private information. While the connection is active, the connecting server(s) should gather user event logs, which should be monitored at least daily. The purpose here is actually prevention: your users should know the corporate IT goons are combing over everything they do when connected, which will keep them honest when tempted to share their laptops with someone who shouldn't have access. Bad-password lockouts don't need to be very aggressive, but they do need to exist in order to thwart brute-force and dictionary attacks.
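Two of the ideas above, dual-control authentication and a non-aggressive bad-password lockout, can be combined in a few lines. This is a toy sketch under assumed names and thresholds, not any vendor's API; a real system would get both factors from the VPN concentrator and its certificate store.

```python
# Hypothetical lockout threshold: generous enough not to punish typos,
# low enough to blunt brute-force and dictionary attacks.
LOCKOUT_THRESHOLD = 10

failed_attempts = {}

def authenticate(user, has_valid_cert, password_ok):
    """Dual control: BOTH the certificate and the password must pass.

    Repeated failures lock the account until an administrator resets it.
    """
    if failed_attempts.get(user, 0) >= LOCKOUT_THRESHOLD:
        return "locked"
    if has_valid_cert and password_ok:
        failed_attempts[user] = 0  # reset the counter on success
        return "granted"
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    return "denied"

# A stolen laptop holds the certificate but not the password:
print(authenticate("jdoe", True, False))  # denied
```

Note that possession of either single factor gets the thief nowhere, which is precisely the point of dual control.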

Besides the access control policy, there are, of course, network design considerations. An ideal scenario would include a honeypot and strict separation of production/sensitive servers from infrastructure/client systems. One cheap way to accommodate that neatly is to do what I did for my company, which processes credit card transactions: place a multi-role firewall/router (Cisco, Checkpoint, Sonicwall, MS-ISA, etc.) as not only the edge security device but also the router for the entire network. That way, traffic flow can be explicitly controlled to and from specific hosts and networks in a seamless, centralized fashion without weakening security or incurring large costs. For example, if a user needs to remotely access systems on the business network but also needs to download a file from the production network, it is just a matter of creating an ACL to allow that specific user to reach a specific port or protocol on a specific server.
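The default-deny, explicit-allow logic behind that ACL example can be modeled in a few lines. The rule entries below are made up for illustration; a real firewall would key on source address and session identity rather than a bare username.

```python
# Explicit allow rules as (user, host, port) tuples; anything not
# listed is denied. Hosts and ports here are illustrative.
acl = [
    ("jdoe", "biz-crm", 443),          # normal business-network access
    ("jdoe", "prod-fileserver", 443),  # one-off download from production
]

def allowed(user, host, port):
    """Default deny: only traffic matching an explicit rule passes."""
    return (user, host, port) in acl

print(allowed("jdoe", "prod-fileserver", 443))  # True
print(allowed("jdoe", "prod-db", 5432))         # False
```

Because every rule names a specific user, host, and port, granting the one-off production download is a single-line change rather than a hole punched for the whole business network.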

In short, an ideal remote access system is a comprehensive, cradle-to-grave trust model based on a detailed access policy enforced by the relevant technology.
