Junos “flow traceoptions” and managing flow trace “log files”

Junos “flow traceoptions” is the debugging utility for tracing how the flow module handles traffic – how traffic traverses from source to destination; how it moves from one interface to another; whether it finds the correct destination path; which security zones are involved in the traffic path; which security policies are applied; whether traffic is permitted or dropped by a firewall rule; which firewall rules or policies are involved; and so on.

Four things need to be addressed while working with flow traceoptions –

  • Enable “flow traceoptions” and send the logs to a flow trace log file.
  • Analyse the flow trace log file to find out what is actually happening.
  • Make sure to disable flow traceoptions once done.
  • Once finished with analysis & inspection, clean up the flow trace log files to maintain available disk space on the Juniper box.

To enable flow traceoptions, the following are the common commands –

#set security flow traceoptions file Flow-Trace-LogFile
#set security flow traceoptions flag basic-datapath

#set security flow traceoptions packet-filter PF1 source-prefix <prefix>
#set security flow traceoptions packet-filter PF1 destination-prefix <prefix>

#set security flow traceoptions packet-filter PF2 source-prefix <prefix>
#set security flow traceoptions packet-filter PF2 destination-prefix <prefix>

Optionally, we can enter the following to set limits so we do not get hammered by huge logs.

#set security flow traceoptions file files 3; keep a maximum of 3 log files (0,1,2)
#set security flow traceoptions file size 2m; each log file is limited to 2MB

The above will create the log file “Flow-Trace-LogFile”; to view the log file, enter the following command –

>show log Flow-Trace-LogFile
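
Flow trace files grow quickly, so Junos pipe modifiers help narrow the output. As a sketch (the match string is illustrative – exact trace wording varies by Junos release):

>show log Flow-Trace-LogFile | last 50
>show log Flow-Trace-LogFile | match "matched filter"

The second command helps confirm which packets actually hit the PF1/PF2 packet filters defined above.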

Once we have finished analysis & inspection of the log files, we should disable traceoptions as follows –

#delete security flow traceoptions

Lastly, to clear a log file or to delete log files, use the following commands.

To clear a log file – enter the following command-

>clear log LogFileName

To delete a log file – enter the following command-

>file delete <path>
>file delete /var/log/flow-trace-logs.0.gz
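
Rotated trace files get a numeric suffix and are compressed, so it helps to list them before deleting. An example, assuming the log file name used earlier –

>file list /var/log/Flow-Trace-LogFile*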


MSSQL 2014 AlwaysOn Availability Group Cluster & Gratuitous ARP (GARP) Issue

MSSQL 2014 AlwaysOn cluster running on Windows 2012 R2 doesn’t send Gratuitous ARP (GARP) packets by default!

I have recently come across gratuitous arp (GARP) issues while working on Microsoft SQL 2014 AlwaysOn Availability Group cluster setup. I experienced the following –

  1. The MSSQL 2014 AlwaysOn cluster with AlwaysOn Availability Group (AG) setup was done as per best practices and expert recommendations; all cluster-related services were running OK without any issue.
  2. Clients sitting on the same IP network/same VLAN were able to connect to the AlwaysOn AG listener Virtual IP (VIP) address immediately after a cluster failover from Node-A to Node-B and vice versa.
  3. However, clients sitting on different IP subnets were NOT able to connect to the VIP immediately after a cluster failover.
  4. Clients sitting on different IP subnets had to wait about 20 minutes to connect to the VIP.
  5. These 20 minutes correspond to the MAC address ageing time on the Ethernet switch (I use Juniper EX-series switches) where the servers are connected (via the physical hypervisor).
  6. At the network layer, the switch ARP table was showing the previously learnt MAC address for the AG listener VIP; the switch didn’t update the MAC address after a cluster failover was triggered. The switch flushed out the old MAC and re-learnt the correct new MAC address only after the MAC ageing time (20 min) expired on the switch.
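
The stale entries can be inspected on a Juniper EX switch with standard show commands –

>show arp no-resolve
>show ethernet-switching table

If the MAC shown for the VIP still belongs to the old node after a failover, the switch never received a GARP.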

I was looking for a solution and found that “GARP reply” needs to be enabled manually on the Juniper EX switch – I did that, but still NO improvement!

I also looked at Microsoft KB documents and forums – people say GARP needs to be turned on on the network switch, which I had already DONE without any success.

After digging further I found that the Windows 2012 R2 servers were not sending any GARP packets, so the switch was not updating its ARP table even though it was configured to work with GARP.

To get this working, the Windows Server registry value “ArpRetryCount” needs to be added; Microsoft says the following about it –

“Determines how many times TCP sends an Address Request Packet for its own address when the service is installed. This is known as a gratuitous Address Request Packet. TCP sends a gratuitous Address Request Packet to determine whether the IP address to which it is assigned is already in use on the network.”

Add the registry entry as follows –

- Type: REG_DWORD, Name: ArpRetryCount
- Value is between 0–3 (use value 3)

0 – don’t send GARP
1 – send GARP once only
2 – send GARP twice
3 – send GARP three times (the default value – though the entry is actually not present on Windows 2012 R2)
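
The value lives under the standard TCP/IP parameters key; one way to add it from an elevated command prompt (a reboot is typically required for the change to take effect) –

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v ArpRetryCount /t REG_DWORD /d 3 /f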

To enable “GARP reply” on the Juniper EX & SRX platforms, use the following command –

#set interfaces <interface-name> gratuitous-arp-reply

The interface can be a physical interface, logical interface, interface group, SVI or IRB.
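
For example, on a hypothetical redundant Ethernet interface or access port –

#set interfaces reth0 gratuitous-arp-reply
#set interfaces ge-0/0/10 gratuitous-arp-reply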

To enable GARP on Cisco IOS, use the “ip gratuitous-arps” command.


Juniper SRX – replacement of a node in chassis cluster with IDP installed

One of the nodes in my SRX chassis cluster failed. I got an RMA replacement SRX box from Juniper. When I tried to add the new device (a brand-new SRX) to the existing cluster by transferring the existing configuration to the new device, as suggested by the Juniper KB, it failed!

The reason for the failure was the IDP attack signature database (Juniper calls it the IDP security package) installed on the existing running node (in the cluster), whereas the new node had no IDP package installed on it.

I was hoping for some sort of automatic IDP signature sync on the new device as part of transferring the configuration before adding it to the existing cluster, but couldn’t find any such solution. So I had to manually download the same IDP security package from the existing running cluster node and install it on the new SRX, along with the existing configuration.

Here is the full procedure (I am keeping this for my own reference to use in future):

1. First things first – wipe out all existing configuration on the new RMA SRX and set root authentication. Also make sure the new node is not connected to the cluster.


#set system root-authentication plain-text-password


2. Configure chassis cluster on the new node. The cluster ID and node ID must be the same as those of the failed cluster node.

>set chassis cluster cluster-id 1 node 0; here cluster-id is 1 & node number is 0

>request system reboot

3. Download the IDP security package from the existing cluster node. The download can be done over SSH/SFTP (you can use FileZilla, WinSCP, or the Mac/Linux scp command) to connect and download the IDP security package.

The attack signature database is located at “/var/db/idpd/sec-download/*“. You can download the whole “sec-download” directory. Once the download is done, copy it to a USB stick (which should be formatted with FAT32).
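
As a sketch, using the Mac/Linux scp command (the management address 192.0.2.1 and user admin are hypothetical, and SSH must be enabled on the running node) –

scp -r admin@192.0.2.1:/var/db/idpd/sec-download /var/tmp/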

4. Transfer & install IDP security package to the new SRX device.

Plug the USB stick into the SRX; mount it and copy the contents to the same destination folder “/var/db/idpd/sec-download/“.

>start shell

%mkdir /var/tmp/usb

%mount -t msdosfs /dev/da1 /var/tmp/usb

%cd /var/tmp/usb/sec-download

%cp -R * /var/db/idpd/sec-download/

5. Install the IDP security package on the new SRX device.

>request security idp security-package install node 0

>request security idp security-package install status

>request security idp security-package install policy-templates node 0

>request security idp security-package install status

Confirm the installation completed successfully (you should see something like the following) –

>show security idp security-package-version 



     Attack database version:2660(Tue Mar  1 01:09:02 2016 UTC)

     Detector version :12.6.160151117

     Policy template version :2660

6. Now download the current running configuration from the existing cluster node.

Following command will create a copy of all configuration-

#save /var/tmp/config-backup-ddmmyy

Connect to the running device using FileZilla or similar over SSH/SFTP; download the “/var/tmp/config-backup-ddmmyy” file. Transfer the file to a USB stick (which should be formatted with FAT32).

You should not make any configuration change to the running device at this point.

7. Load the downloaded configuration to the new SRX device via USB.

Plug the USB stick into the new SRX box.

>start shell

%mount -t msdosfs /dev/da1 /var/tmp/usb

Exit the shell and enter configuration mode, then load and commit the configuration –

#load override /var/tmp/usb/config-backup-ddmmyy

#commit


Now power off the new SRX and get ready to add it to the existing cluster.

>request system power-off

8. Connect all the network cables “same as before”. Power on the new device.

9. Check cluster status – both the nodes should be back online.

>show chassis cluster status

That’s all!