Friday, August 19, 2016

Setting up .NET Core on Ubuntu and publishing a website

What can I say, I love Linux, but I also love Visual Studio and C#. When I first learned that .NET Core could run on Linux, it felt like the joining of two worlds. In this post, I hope to walk through setting up the environment and publishing a website using these technologies.
Prep the environment -

  • Install Ubuntu 16.04.01 server
  • standard system utilities
  • OpenSSH server

Install .NET Core for Ubuntu
1. Add the .NET Core feed to your apt sources list
sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893

2. Update and install .NET Core SDK
sudo apt-get update
sudo apt-get install dotnet-dev-1.0.0-preview2-003121
sudo apt-get install npm
sudo npm install -g bower grunt-cli gulp
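To confirm the tooling installed correctly, the following should print the SDK and npm version strings:
dotnet --version
npm --version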

3. Configure the firewall to only allow SSH and web traffic
sudo apt-get install ufw
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 22/tcp
sudo ufw enable
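A quick sanity check that only the intended ports are open (ufw denies other inbound traffic by default):
sudo ufw status verbose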

4. Create your first .NET Core web app
Install Visual Studio 2015 Community Edition: https://www.visualstudio.com/en-us/visual-studio-homepage-vs.aspx
Install Visual Studio 2015 Update 3: https://go.microsoft.com/fwlink/?LinkId=691129

Open Visual Studio Community Edition and create a New Project 
For the project template, select Installed | Templates | Visual C# | Web | ASP.NET Core Web Application (.NET Core)
Name the project HelloWorld and click OK to create the project
Select Web Application, clear the checkbox Host in the cloud, and click OK
When the project opens, you can select Debug | Start Debugging, or press F5, to see the contents of the website

Publish the website
In Visual Studio Solution Explorer, under the src folder, right-click the project HelloWorld and click Deploy. You may need to configure a deployment template to deploy to the local file system.
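If you prefer the command line, roughly the same output can be produced with dotnet publish, run from the project directory on any machine with the SDK installed (the output folder below is just an example):
dotnet publish -c Release -o publish-output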

Using WinSCP, copy the published folder to /var/aspnetcore/HelloWorld
Set ownership to root - 
sudo chown root:root /var/aspnetcore -R
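Before wiring up NGINX and Supervisor, it's worth confirming the app actually runs on the server; by default Kestrel listens on port 5000 (Ctrl+C stops it when you're done):
cd /var/aspnetcore/HelloWorld
dotnet HelloWorld.dll
From a second SSH session, curl http://localhost:5000 should return the site's HTML.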

5. Install / configure NGINX
sudo apt-get install nginx
sudo service nginx start
sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/default.old
sudo nano /etc/nginx/sites-available/default

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
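After saving the file, check the syntax and reload NGINX so the proxy configuration takes effect:
sudo nginx -t
sudo service nginx reload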



6. Setup and configure Supervisor
sudo apt-get install supervisor
sudo nano /etc/supervisor/conf.d/helloworld.conf

[program:helloworld]
command=/usr/bin/dotnet /var/aspnetcore/HelloWorld/HelloWorld.dll
directory=/var/aspnetcore/HelloWorld/
autostart=true
autorestart=true
stderr_logfile=/var/log/helloworld.err.log
stdout_logfile=/var/log/helloworld.out.log
environment=ASPNETCORE_ENVIRONMENT=Production
user=www-data
stopsignal=INT

sudo service supervisor stop
sudo service supervisor start
sudo update-rc.d supervisor enable
The website should now be visible. Supervisor should automatically start the website if a system reboot occurs or if the site crashes.
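If the site doesn't come up, a couple of quick checks (assuming the paths and program name used above):
sudo supervisorctl status helloworld
tail -n 50 /var/log/helloworld.err.log
curl -I http://localhost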


References:
Installing .NET Core: https://www.microsoft.com/net/core#ubuntu
Getting started with .NET Core: https://docs.asp.net/en/latest/getting-started.html
Publishing a website to .NET Core: https://docs.asp.net/en/latest/publishing/linuxproduction.html


Tuesday, March 22, 2016

One method to synchronize SQL users in AlwaysOn groups

I have recently started working with AlwaysOn availability groups. The suggested method for dealing with users in AlwaysOn groups is partially contained databases: the user then lives in the database itself, which is replicated to all nodes. However, there are some situations where you can't make a DB a contained DB.

One of the things I like to do with SQL is create a read-only SQL user for reporting named 'ReportUser'. Not knowing the intricacies of AlwaysOn, I created the SQL user on both nodes the traditional way. This worked at first, but when I failed over to the other node, my access stopped working.

After some googling, I found the answers at https://support.microsoft.com/en-us/kb/918992 and https://msdn.microsoft.com/en-us/library/ms175475.aspx. It turns out that each login has a SID; because the logins were created separately, the SIDs differed between the two servers, and only one of them matched the SID stored in the DB.

How I resolved it
I started by looking at the Orphaned User report
exec sp_change_users_login @Action='Report'

This returned the SID of my ReportUser account in the database. I failed the DB over to the other server and then deleted the ReportUser account from my system.
Then, I recreated my user, but this time specifying the SID:
CREATE LOGIN [ReportUser] WITH PASSWORD = 'oiu234rno89aSdf', SID = 0x144186B7023F4D43BDA437284FB3E1D9, DEFAULT_DATABASE = [master], CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF
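If you'd rather pull the SID straight from the server where the login still works (instead of reading it from the orphaned users report), a query against sys.server_principals returns it; the server name below is a placeholder:
sqlcmd -S PRIMARYSERVER -Q "SELECT name, sid FROM sys.server_principals WHERE name = 'ReportUser'"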


Now, since the SIDs match on the two servers, when the DB migrates between the servers, my account works.

Monday, November 30, 2015

Parsing ASA logs using ELK and my central syslog archive

Previously, I had set up a centralized syslog archive and directed all of my network devices to report into it. This turned out to be great timing, as there was a network issue, and having the logs in one place made the job that much easier. The next step, however, is to put the information into some sort of database for parsing and easy searching.

I did some looking around, specifically at Splunk, but found that it had a fairly high barrier to entry. Instead, I found an open-source solution called ELK (Elasticsearch, Logstash, Kibana). I had heard several people mention this as an entry-level "Big Data" solution, so I gave it a go.

For this service, I decided to build an Ubuntu 15.04 server because I am more familiar with the packaging/update process, and because the online articles used it for their platforms.

Basic ELK setup:
For the basic ELK install and configure, I followed the directions posted at https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04, with a few exceptions.

  • A newer version of the ELK stack had just been released, so I headed to https://www.elastic.co/downloads and downloaded Elasticsearch 2.1, Logstash 2.1, and Kibana 4.3
  • The newer versions appear to be restricted to the local system by default, so I didn't need to edit elasticsearch.yml or kibana.yml to restrict them to localhost
  • I didn't password-protect my URL
  • I didn't set up the Logstash Forwarder package (yet)
Bringing in the logs:
At this point, we have a fully functional ELK stack, but no data is being delivered to it. Since my syslog server is a RHEL 7 host, I followed the Logstash Forwarder setup directions at https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7. Here, I configured the Logstash Forwarder to read all logs under /var/log/syslog/*.log and forward them to my ELK server, as sketched below.
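For reference, the forwarder configuration on the syslog server ends up looking roughly like this. This is a minimal sketch: the ELK server hostname is a placeholder, the certificate is the one copied over from the ELK server per the tutorial, and the port matches the lumberjack input shown later in this post.

/etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "elk.example.local:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/syslog/*.log" ],
      "fields": { "type": "syslog" }
    }
  ]
}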

Now we should be getting tons of data into our ELK server, and it should be appearing in Kibana. If the data isn't appearing:

  • Confirm the services are running (on both systems)
  • Confirm the send/receive ports are correct
  • Confirm the firewall isn't blocking the traffic
  • Use tcpdump on both systems to ensure the traffic is being sent and received (example below)
  • Look in elasticsearch.log and elasticsearch.err for any errors
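A minimal capture on the lumberjack port, run on either end, looks like this (the interface name is an assumption; substitute your own):
sudo tcpdump -nn -i eth0 port 5043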


Making sense of the ASA logs:
Now that we have data appearing in Kibana, we can start parsing it. This can first be done by simply typing queries in the Kibana screen. However, we can also categorize data and have Logstash parse out additional categories. I am going to start with the logs from my Cisco ASA.
There are lots of examples online on how to parse the ASA messages, but each one is either missing something, or is slightly off because of version differences. Ultimately I found https://jackhanington.com/blog/2014/04/21/using-logstash-elasticsearch-and-kibana-for-cisco-asa-syslog-message-analysis/ and https://jackhanington.com/blog/2015/06/16/send-cisco-asa-syslogs-to-elasticsearch-using-logstash/ to be the most complete.
One thing to note - these pages categorize the ASA traffic as a new type (the config below replaces the type with "cisco-fw"). This appears to be important; otherwise some of the field data types will conflict with others, causing your error logs to fill up.

Once the ASA logs are being parsed, we can now begin to query based on source or destination addresses, service types, success/failure, and so on. A little work in Kibana, and we can create a dashboard showing the top utilizers, source locations of traffic, easily identify spikes and so on.




For posterity, below are the configuration files for my environment:
/etc/elasticsearch/elasticsearch.yml  -- no change from default
/etc/elasticsearch/logging.yml  -- no change from default
/etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}


/etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    mutate {
      add_tag => [ "syslog" ]
    }
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}


/etc/logstash/conf.d/20-cisco-asa.conf
filter {
  if "syslog" in [tags] and "pre-processed" not in [tags] {
    if "%ASA-" in [message] {
      mutate {
        add_tag => [ "pre-processed", "Firewall", "ASA" ]
        replace => { "type" => "cisco-fw" }
      }
      grok {
        patterns_dir => "/opt/logstash/patterns/custom"
        match => [ "message", "%{GREEDYDATA:cisco_message}" ]
      }

      grok {
        match => [
          "cisco_message", "%{CISCOFW106001}",
          "cisco_message", "%{CISCOFW106006_106007_106010}",
          "cisco_message", "%{CISCOFW106014}",
          "cisco_message", "%{CISCOFW106015}",
          "cisco_message", "%{CISCOFW106021}",
          "cisco_message", "%{CISCOFW106023}",
          "cisco_message", "%{CISCOFW106100}",
          "cisco_message", "%{CISCOFW110002}",
          "cisco_message", "%{CISCOFW302010}",
          "cisco_message", "%{CISCOFW302013_302014_302015_302016}",
          "cisco_message", "%{CISCOFW302020_302021}",
          "cisco_message", "%{CISCOFW305011}",
          "cisco_message", "%{CISCOFW313001_313004_313008}",
          "cisco_message", "%{CISCOFW313005}",
          "cisco_message", "%{CISCOFW402117}",
          "cisco_message", "%{CISCOFW402119}",
          "cisco_message", "%{CISCOFW419001}",
          "cisco_message", "%{CISCOFW419002}",
          "cisco_message", "%{CISCOFW500004}",
          "cisco_message", "%{CISCOFW602303_602304}",
          "cisco_message", "%{CISCOFW710001_710002_710003_710005_710006}",
          "cisco_message", "%{CISCOFW713172}",
          "cisco_message", "%{CISCOFW733100}",
          "cisco_message", "%{CISCOFW113039}"
        ]
      }

      syslog_pri { }

      geoip {
        add_tag => [ "GeoIP" ]
        database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
        source => "src_ip"
      }

      if [geoip][city_name]      == "" { mutate { remove_field => "[geoip][city_name]" } }
      if [geoip][continent_code] == "" { mutate { remove_field => "[geoip][continent_code]" } }
      if [geoip][country_code2]  == "" { mutate { remove_field => "[geoip][country_code2]" } }
      if [geoip][country_code3]  == "" { mutate { remove_field => "[geoip][country_code3]" } }
      if [geoip][country_name]   == "" { mutate { remove_field => "[geoip][country_name]" } }
      if [geoip][latitude]       == "" { mutate { remove_field => "[geoip][latitude]" } }
      if [geoip][longitude]      == "" { mutate { remove_field => "[geoip][longitude]" } }
      if [geoip][postal_code]    == "" { mutate { remove_field => "[geoip][postal_code]" } }
      if [geoip][region_name]    == "" { mutate { remove_field => "[geoip][region_name]" } }
      if [geoip][time_zone]      == "" { mutate { remove_field => "[geoip][time_zone]" } }

      mutate {
        replace => [ "host", "%{sysloghost}" ]
      }
    }
  }
}


/etc/logstash/conf.d/90-lumberjack-output.conf
output {
  elasticsearch { hosts => ["localhost:9200"] }
#  stdout { codec => rubydebug }
}

Wednesday, November 18, 2015

Setting up a central syslog archive

Working in a large-ish environment, things will occasionally break and leave you trying to pick up the pieces. One of the key fixes for this is to set up a central logging server so that you can see logs from all of your network devices at a glance and easily correlate between events.

I have also recently been looking into the security needs of my environment. One of the top suggested items is a logging archive - that is, an archive of all the logs from critical devices, kept for up to a year. These two items are close enough that we should be able to easily solve both with one solution.

When researching solutions, there were tons available online, some offered as hosted services, but they all seemed more complex than what I was looking for. I simply wanted a basic log management solution - maybe more in the future, but that is all for now.

I decided to use a RedHat 7.1 server running rsyslog as the foundation for my solution. I chose RedHat because it is used extensively in my organization and someone else could easily take it over if I am not available, and rsyslog because it comes with RHEL and has good reviews.

Once RHEL was installed, the configuration was fairly simple.
I started off by adding a second disk for storing the syslog messages. This lets me segment the system from the messages and make sure that one doesn't impact the other. I decided to store my logs in /var/log/syslog
mkdir /var/log/syslog
fdisk /dev/sdb
mkfs -t xfs /dev/sdb1
vi /etc/fstab
mount -a
df -h
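The fstab entry added for the new disk looks something like this (assuming the partition is /dev/sdb1, as created above):
/dev/sdb1    /var/log/syslog    xfs    defaults    0 0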

Next was to open the firewall to receive the syslog messages
firewall-cmd --zone=public --add-port=514/udp --permanent
firewall-cmd --zone=public --add-port=514/tcp --permanent
firewall-cmd --reload
iptables-save

Now that the system can receive syslog messages, we need to configure rSyslog to listen for messages. To do this, we edit the rsyslog.conf file and uncomment the following lines to listen on both TCP and UDP ports 514
vi /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
service rsyslog restart
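After the restart, a quick check that rsyslog is actually listening on UDP and TCP port 514:
ss -tulnp | grep 514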

Once you are receiving syslog messages, the next major hurdle is to separate the logs into per-device files. The goal is to have a separate log file for each device so that you can view the logs for a router or switch without having to weed through messages from other devices. To accomplish this, we edit the rsyslog.conf file as shown in http://www.rsyslog.com/storing-messages-from-a-remote-system-into-a-specific-file/.
vi /etc/rsyslog.conf
$template PerHostLog,"/var/log/syslog/%HOSTNAME%.log"
if $fromhost-ip startswith '10.' then -?PerHostLog
& STOP
service rsyslog restart
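To verify the per-host split, send a test message from any other Linux host on the 10.x network (recent versions of logger can send to a remote server; the archive server's IP below is a placeholder) and look for a new file named after that host:
logger -n 10.1.2.3 -P 514 -d "central syslog test"
ls /var/log/syslog/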

Lastly, configure logrotate to manage the logs on the system.
vi /etc/logrotate.d/syslog
/var/log/syslog/*.log
{
copytruncate
rotate 365
daily
missingok
dateext
notifempty
delaycompress
create 664 root root
compress
maxage 366
sharedscripts
lastaction
/bin/systemctl restart  rsyslog.service
endscript
}



This should keep a year's worth of logs in one place for easy review. Next, my goal is to send Windows event logs to this server and use this system as a feed for a SIEM or ELK solution.

Monday, October 12, 2015

F5 201 - TMOS Administration Exam

I passed the F5 101 exam a while back with fairly good marks. Given that the subject matter was so broad, I was surprised at the depth of some of the questions. I won't give any specifics, but if you are in the process of studying for the 101 exam, make sure you understand everything listed in the exam blueprint.

Now I am on to the F5 201 exam, TMOS Administration. According to the exam blueprint (https://www.f5.com/pdf/certification/exams/blueprint-tmos-administration.pdf), the scope of the exam is much narrower than the 101 exam, which is kind of scary. However, looking at the objectives and examples, it covers much more "real life" scenarios - so it should be easier than memorizing obscure details about TCP headers.
If you haven't already, download the BIG-IP Virtual Edition and get to testing...




Section 1: Troubleshoot basic virtual server connectivity issues 

Objective 1.01 Given a connectivity troubleshooting situation, consider the packet and virtual server processing order
Examples

  • Explain how a packet is processed once it arrives at device (connection table, packet filters, etc.)
  • Explain how a virtual server processes a request (most specific to least specific)
  • Given a specific connectivity issue, isolate where the problem might be according to the processing order

Objective 1.02 Identify the reason a virtual server is not working as expected
Examples

  • Determine the state of a virtual server (offline, enabled, etc.)
  • Determine if a virtual server is configured for the proper listening port
  • Determine if a virtual server is configured with the proper IP address configuration
  • Determine if the virtual server is configured with the appropriate profiles
  • Determine if the pool configuration has an effect on virtual server state
  • Determine which tools to use in order to diagnose the issue
  • Explain the difference between the virtual server's status definitions

Objective 1.03 Identify the reason a pool member has been marked down by health monitors
Examples

  • Discuss the effects of health monitors on the status of pool members/nodes
  • Determine the state and availability of the pool member/node in question
  • Verify the pool member/node Ratio configuration
  • Verify the pool member/node connection configuration and count

Objective 1.04 Identify a pool member not in the active priority group
Examples

  • Explain the concept of “persistence”
  • Verify the type of persistence profile assigned to the virtual server in question
  • Validate the expected persistence behavior
  • Differentiate between fallback and primary persistence
  • Use the appropriate tool to troubleshoot persistence

Objective 1.05 Identify traffic diverted due to persistence record
Objective 1.06 Identify the current configured state of the pool member
Objective 1.07 Identify a persistence issue

Section 2: Troubleshoot basic hardware issues 

Objective 2.01 Perform an End User Diagnostic per F5 documentation and collect the output
Examples

  • Reboot an F5 platform into the EUD
  • Download output from the unit an EUD was run on
  • Interpret the output from an EUD and determine if the test passed or failed

Objective 2.02 Interpret the LCD Warning Messages
Examples

  • Locate the LCD on an F5 Platform
  • Correlate the LCD message to message in the corresponding log file
  • Identify which tasks the buttons on the LCD perform

Objective 2.03 Identify a possible hardware issue within the log files
Examples

  • Indicate which logs would contain debugging information
  • Given a log file, determine the nature of a hardware issue
  • Given a possible issue, determine which log file entries to review

Objective 2.04 Force an active unit to standby under the appropriate circumstances
Objective 2.05 Understand the relationship between interfaces, trunks, VLANs and their status/statistics


Section 3: Troubleshoot basic performance issues 

Objective 3.01 Recognize when a packet capture is needed within the context of a performance issue
Objective 3.02 Use BIG-IP tools in order to identify potential performance issues
Examples

  • Differentiate between performance issue types (i.e. Latency, Congestion, broken content)
  • Establish the frequency of a given issue (random, continuous, isolated, intermittent, repetitive intervals)
  • Explain how to get performance statistics in addition to those shown in the dashboard (Overview - Performance)


Section 4: Troubleshoot basic device management connectivity issues 

Objective 4.01 Verify remote connectivity to the BIG-IP in order to determine the cause of a management connectivity issue
Examples

  • Isolate potential causes of basic network connectivity issues, given scenarios related to: client configuration, client network access, device network access, network topologies
  • Apply connectivity troubleshooting tools (i.e. ping, traceroute, http/https availability, remote shell access, network based console access) in the appropriate situation

Objective 4.02 Check and interpret port lockdown settings and packet filters in order to determine the cause of a management connectivity issue to a Self-IP
Examples

  • Given a scenario, review port lockdown settings on the Self-IP to determine the cause of the issue
  • Describe appropriate use cases for the use of port lockdown

Objective 4.03 Given the use of a remote authentication server, verify proper DNS and NTP settings in order to diagnose a connectivity issue
Examples

  • Determine whether a filter is enabled
  • Interpret a packet filter rule list in a given situation


Section 5: Open a support ticket with F5 

Objective 5.01 Identify the appropriate supporting components and severity levels for an F5 support ticket
Examples

  • Identify the necessary components for all support cases (Qkview uploaded to iHealth/ or attached to case, serial number of device, problem description, other supporting data)
  • Identify severity levels and the associated response times

Objective 5.02 Given an issue, determine the appropriate severity according to F5 guidelines
Objective 5.03 Provide quantitative and relevant information appropriate for a given issue

Examples

  • Distinguish between qualitative/quantitative statements in order to assemble an accurate problem description
  • Distinguish between relevant/irrelevant information in order to assemble an accurate problem description


Section 6: Identify and report current device status 

Objective 6.01 Review the Network Map in order to determine the status of objects
Examples
  • Explain the status icons of objects on the map
  • Explain what virtual servers, pools, nodes and pool members are

Objective 6.02 Use the dashboard to gauge the current running status of the system
Examples

  • Interpret each of the statistic types displayed by the dashboard
  • Given a situation, predict the appropriate dashboard statistics

Objective 6.03 Review log files and identify possible events
Examples

  • Given log file snippets, describe an event sequence
  • Given log file snippets, identify critical events

Objective 6.04 Use iApps Analytics to gauge the current running status of application services
Examples

  • Explain the purpose of iApps Analytics
  • Describe how to capture application statistics
  • Given a current running status, recognize significant statistics


Section 7: Maintain system configuration 

Objective 7.01 Create and restore a UCS archive under the appropriate circumstances
Examples

  • Discuss scenarios in which restoring a UCS archive is appropriate
  • Discuss the tasks involved in successfully restoring a UCS archive
  • Given a scenario, discuss when it is appropriate to create a UCS archive

Objective 7.02 Identify which high-level tasks can be automated using BIG-IQ
Objective 7.03 Manage software images
Objective 7.04 Given an HA pair, describe the appropriate strategy for deploying a new software image
Objective 7.05 Understand the processes of licensing, license reactivation, and license modification (add-ons)
Objective 7.06 Identify which modules are licensed and/or provisioned
Objective 7.07 Explain how to create a user
Objective 7.08 Explain how to modify user properties

Section 8: Manage existing system and application services 

Objective 8.01 Modify and manage virtual servers
Examples

  • Given a proposed virtual server configuration change, outline the scope of the change and which connections those changes will affect (active connections, new connections, persisted sessions)
  • Given a description of an application, identify the correct virtual server configured for it (HTTP/HTTPS, TCP/UDP, VLANs enabled, route-domain)
  • Given a situation where a virtual server configuration change did not appear to immediately take effect, determine why

Objective 8.02 Modify and manage pools
Examples

  • Distinguish between disabling a member and forcing it down
  • Determine use cases for disabling a member
  • Determine use cases for forcing down a member
  • Given a situation where a pool member has been disabled but still appears to be receiving traffic, determine the cause
  • Articulate the characteristics of a pool member that has been disabled or forced offline (Such as for new connections, persisted connections, etc.)