Tuesday, August 22, 2017

Check Point Firewall: The Difference Between ZDEBUG, FW MONITOR, And TCPDump

Ok.  I said a few days ago that I would write this post about the differences between these three commands.  Here it is.  I had a lot of info I wanted to put into this, but for the sake of just getting the info out there, I decided to just give the basics of the commands.  Just FYI, these three commands have been very helpful to me in troubleshooting.  And honestly, at the beginning of this, I could only tell you the difference between two of these three commands.  Now it's different, and I hope this helps you as well.
FW CTL ZDEBUG is a CLI command for seeing dropped packets in real time on the firewall.  This can include packets that are dropped by the Check Point application OR by the OS of the box.  From the application, this could mean the Rulebase, IPS, etc.  From the OS, this could mean packets dropped due to a full queue, etc.  ZDEBUG is especially helpful in determining the reason a packet is dropped.  The reality is that some dropped packets just do not show up in SmartView Tracker.
Below is an example of some dropped packets and the reasons:

;[cpu_9];[fw4_6];fw_log_drop_ex: Packet proto=6 -> dropped by fw_handle_first_packet Reason: Rulebase drop - rule 10 <-- This was dropped because of the Check Point firewall rulebase.  Rule 10 was the rule that it matched and dropped on.

;[cpu_10];[fw4_5];fw_log_drop_ex: Packet proto=6 -> dropped by fw_handle_first_packet Reason: Geo Protection <-- Simple enough.  This packet is from Russia, which is blocked on this firewall.

fw ctl zdebug drop is the CLI command.  This captures all packets that are dropped.  You can use grep to cut down on the amount of traffic you see and search specifically for the traffic you want.
fw ctl zdebug drop | grep followed by an IP address will search for any dropped packet with that source or destination IP address.
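As a sketch of what that filtering does, here are the two sample drop lines from above used as stand-in data and piped through grep (on a real gateway you would pipe fw ctl zdebug drop itself; the file path here is just for illustration):

```shell
# Stand-in zdebug output, using the two example drop lines shown above:
cat <<'EOF' > /tmp/zdebug_sample.txt
;[cpu_9];[fw4_6];fw_log_drop_ex: Packet proto=6 -> dropped by fw_handle_first_packet Reason: Rulebase drop - rule 10
;[cpu_10];[fw4_5];fw_log_drop_ex: Packet proto=6 -> dropped by fw_handle_first_packet Reason: Geo Protection
EOF
# On a live gateway the equivalent would be:
#   fw ctl zdebug drop | grep 'Geo Protection'
# grep keeps only the drop lines that match your pattern (an IP, a reason, etc.):
grep 'Geo Protection' /tmp/zdebug_sample.txt
```

The same idea works with an IP address as the pattern, which is usually how you narrow the output down to one conversation.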

FW MONITOR is a CLI command for capturing packets as they pass through the firewall in real time.  This command does not show dropped packets.  fw monitor allows you to capture packets at multiple capture positions within the FireWall-1 kernel module chain, both for inbound and outbound packets. This enables you to trace a packet through the different functionalities of the firewall. The primary mode of troubleshooting would be to use something like the following to see packets for a given source or destination IP:
fw monitor -e "accept src= or dst=;"  This will show you the stages for packets with that IP as the source or destination.

Most of the time, you want to see the packet go all the way through the kernel.  Your command might look something like this:
fw monitor -e "accept host (;"  This will show you the 4 stages that this particular IP goes through, and is most likely what you will use the most.  You are basically looking at the view of the packet traversal below.  This will help you determine if packets are coming through, and if NAT'ing and routing are working.

You can also expand this view by using the –p all option, as shown below:
fw monitor –p all -e "accept host (;" 
You are basically looking at a multiple-point view of the packet traversal through the firewall:
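For reference, the four standard capture points fw monitor prints are the letters i, I, o, and O. A quick annotated sketch (x.x.x.x is a placeholder for whatever host you are tracing):

```shell
# fw monitor capture points, in the order a routed packet hits them:
#   i - pre-inbound   (packet arrives, before inbound kernel inspection)
#   I - post-inbound  (after inbound kernel inspection)
#   o - pre-outbound  (before outbound kernel inspection)
#   O - post-outbound (after outbound kernel inspection, packet leaves)
#
# If a packet shows up at i/I but never at o/O, it is being dropped or
# mis-routed inside the firewall.  If O shows the translated address, NAT worked.
fw monitor -e "accept host(x.x.x.x);"
```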

TCPDump is a CLI command that allows you to capture packets on an interface.  You see packets in real time as they hit the interface, but not as they pass through the firewall; you are only capturing on the interface itself.  This is similar to the way packet captures work on a Cisco ASA, or what you would see in Wireshark.  If you see a packet coming in one interface, but not going out another, you will probably need to run the fw monitor command to find out where it is failing.  If you suspect dropped packets, you can use the zdebug command.
tcpdump -i eth1 host     <-- Tells tcpdump to monitor eth1 for the host you specify.
'tcpdump -i' captures traffic on a specific interface.
'tcpdump -e' displays source and destination MAC addresses.
CTRL+C stops 'tcpdump'.
By default, only the first 68 bytes of every packet are captured, unless the capture size is increased with the '-s' flag. If the traffic is unencrypted, passwords are also copied into the capture.
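Putting those flags together, here is a short sketch of common variations (x.x.x.x and the output file path are placeholders, not values from the original capture):

```shell
# Traffic to/from one host on eth1:
tcpdump -i eth1 host x.x.x.x
# Same, but also print source/destination MAC addresses:
tcpdump -i eth1 -e host x.x.x.x
# Full packets (no 68-byte truncation), saved to a file for Wireshark:
tcpdump -i eth1 -s 0 -w /var/tmp/capture.pcap host x.x.x.x
# Read a saved capture back later:
tcpdump -r /var/tmp/capture.pcap
```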

Monday, August 21, 2017


As some of you know, I'm currently working on the CISSP certification. And during this time, I have asked myself this question: Why does a technical resource need this certification?
I'm sure some people will disagree, but for me, the answer is:  They don't.
I'm still going to get it, because I know people expect you to have it for what I do for a living.  But honestly, the topics on the CISSP exam do not reflect what a technical person really needs to know.
However, for management, yes, I can see it. They DO need this cert, based on what I'm studying. And to me, it should be a requirement of any CISO who is actively working as a company security policy manager (because policy is what they really are supposed to do). What I'm studying is about policy, not how to stop someone from hacking into the network or even best practices with config.
CISOs get paid a lot of money. You need to require them to have this cert.

Sunday, August 20, 2017

Sunday Thought: Acts 10:15

In a time of so much hostile fighting and so much backbiting here within the US, this is an interesting thought. When the Gentiles and the Jews didn't necessarily get along, God was very clear to Peter in this message: "What God hath cleansed, that call not thou common."

Saturday, August 19, 2017

Pic Of The Week: Dust

 Remember when I talked about vacuuming out my vents when working on my HVAC? Well, this was one of the intakes. This is an old 1950s house, and it's probably never been cleaned out since the duct work was put in.

Friday, August 18, 2017

Quote For The Day: 50

 “Never, ever give up. There will be times in your life you'll want to quit, you'll want to go home, you'll want to go home perhaps to that wonderful mother that's sitting back there watching you and say, 'Mom, I can't do it. I can't do it.' Just never quit. Go back home and tell mom, dad, ‘I can do it, I can do it. I will do it.’ You're going to be successful.” ~~ Donald Trump

Thursday, August 17, 2017

Check Point Firewall: Difference Between "fw mon", "zdebug" And "TCPDump"

I've decided that there is just some documentation that is missing on a few topics. The difference between these Check Point commands (fw monitor, zdebug, and tcpdump) is something that needs some explaining. I'm putting this together and will have this one up in a few days.  Stay tuned...

Wednesday, August 16, 2017

Quote For The Day: 49

 “Remember this: Nothing worth doing ever, ever, ever came easy. Following your convictions means you must be willing to face criticism from those who lack the same courage to do what is right.” ~~ Donald Trump

Friday, August 11, 2017

Check Point Firewall: tcpdump In CLI

I've had to do some troubleshooting on a network issue recently, where I needed to do a tcpdump to verify that the packets were actually leaving the firewall.  They were.  You can see the traffic coming in from the private IP, then being NAT'ed to the public IP and on to the destination.

[Expert@CheckPoint1:0]# tcpdump -i any -vvv dst
tcpdump: WARNING: Promiscuous mode not supported on the "any" device
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes

12:25:20.917906 IP (tos 0x0, ttl 127, id 8280, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 1, seq 19, length 40
12:25:20.918146 IP (tos 0x0, ttl 126, id 8280, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 46253, seq 19, length 40
12:25:21.919046 IP (tos 0x0, ttl 127, id 8285, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 1, seq 20, length 40
12:25:21.919096 IP (tos 0x0, ttl 126, id 8285, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 46253, seq 20, length 40
12:25:26.863785 IP (tos 0x0, ttl 127, id 8291, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 1, seq 21, length 40
12:25:26.863884 IP (tos 0x0, ttl 126, id 8291, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 46253, seq 21, length 40
12:25:27.862351 IP (tos 0x0, ttl 127, id 8293, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 1, seq 22, length 40
12:25:27.862397 IP (tos 0x0, ttl 126, id 8293, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 46253, seq 22, length 40
12:25:30.053671 IP (tos 0x0, ttl 127, id 8304, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 1, seq 23, length 40
12:25:30.053732 IP (tos 0x0, ttl 126, id 8304, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 46253, seq 23, length 40
12:25:31.066946 IP (tos 0x0, ttl 127, id 8306, offset 0, flags [none], proto: ICMP (1), length: 60) > ICMP echo request, id 1, seq 24, length 40

Thursday, August 10, 2017

Wednesday, August 9, 2017

Home Projects: Killing Roaches

If there's one thing I don't care for, it's roaches. Here in central Alabama, there are a lot of them. And sometimes, they get into the house.
So, it's time to kill them. I found a mixture that is supposed to do the trick:
1 part boric acid
1 part sugar
1 part flour
Mix all three together and you get the final product. (Don't mix this where you eat.)  If you have carpet or hardwoods, put this on something like aluminum foil, to protect your floor.
I set some of this mixture outside to test it out, and sure enough, at one point when I checked it, I saw five roaches at once, all on top of the mixture.  I read that it takes killing three generations of them to totally get rid of them inside (6 to 8 weeks).  Let's see how this works out.

Tuesday, August 8, 2017

The Data Center Walk-through...

One thing I'm making sure I do at one customer in particular is a walk-through of the data center.  Once a week, one member of my team walks through the customer's data center.  It's important.  There is a lot of gear in this data center, probably around 40 racks or so.
What are we looking for?  Lights.  Or missing lights.  Amber lights.  Anything that doesn't look right.  I know that in the top of each rack, I expect to see 6 power supply lights, and in some racks, 8 power supply lights.  I know that for each of the Aggregation and Core switches (12 total), I should see 4 power supply lights, 2 supervisor lights, and 6 lights across the top front of the Nexus gear, and I should never see an amber light, or one that is out (or blinking).
Every rack has a certain number of "green lights" on it.  Even the sound can help you determine if something isn't right.  And if you look at these often enough, you start to notice when something doesn't "look right" or "sound right".  It actually gets easy to see when something isn't functioning correctly.  For example, I can tell you within a few seconds if a power supply has gone bad, just by glancing at the rack.  I've trained my eyes on what to look for, and it's a huge benefit to the operation of the data center.  Could I automate this?  Probably.  But I would rather "know" the data center myself instead of depending on even more electronics and software to tell me when it thinks something is wrong.
Get to know your data center (or closets) more intimately.  You will be glad you did.

Monday, August 7, 2017

Home Projects: A Leaky Roof

In that old '35 house of mine, I had a place in the roof that had a leak. I had been up in the attic myself to look for it, but I just couldn't find it. So I had to call a roofer to look at this and get it fixed, and I learned something the other day that I just hadn't thought of.
See the picture below. That roofer came out, and he immediately saw the problem after climbing up on top of the roof. He saw a carpenter nail just below one of the shingle lines. Notice the tip of the nail: it's rusty.  Water has definitely been there. So some liquid tar and a few minutes later, leak fixed.

Thursday, August 3, 2017

Pic Of The Week: A Good Day

My home security camera caught this image this morning.  It's me, starting the day off with consulting, then lunch and target practice at the shooting range with my son-in-law. Not a bad day.

Wednesday, August 2, 2017

Check Point Firewall: Modifying The FWKERN.CONF File To Overcome Dropped Packets From The Queue Buffer

Here recently, I had a server guy come to me and tell me that he needed some network help to get an issue of his resolved.  Long story short, his NetApp replication from one site to another was failing, and he couldn't find anything wrong in his configuration to solve the issue.  After troubleshooting the firewall and network from my perspective, I didn't see anything wrong either.  This, needless to say, did not help him out any.
However, after further review, I found that the reason I didn't see anything in my firewall logs was because it wasn't making it to the Check Point application itself.  There actually were dropped packets, just at the OS level.  This took some time to troubleshoot, but what we found was that the queue limit buffer was getting too much traffic and was dropping packets.
So, what did we do?  Well, the queue limit is set to 2048 by default in Gaia on the Check Point appliances.  We wanted to raise that limit to 8196, since we had plenty of memory to do so (don't do this unless you know for sure you have plenty of resources, as this may not resolve your issue).  In this case, my CPU (CPU #1) was consistently hitting 100% utilization.  So, time to edit the fwkern.conf file.
After logging into the Check Point CLI and going into expert mode, I went to the /var/opt/fw.boot/modules directory, where the fwkern.conf file resides.  I opened it in the vi editor and put in the following:
fwmultik_input_queue_len = 8196

After coming out of the vi editor and rebooting the HA cluster, everything worked well and his NetApp issue was resolved.  No more dropped packets from the buffer, and CPU down to 10%.  To check what your setting is, do the following:
[Expert@CheckPoint:0]# fw -i k ctl get int fwmultik_input_queue_len
fwmultik_input_queue_len = 2048
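The whole change boils down to a couple of steps. A minimal sketch, assuming the file and directory from the post (if fwkern.conf doesn't exist yet, this creates it; the setting only takes effect after the reboot):

```shell
# Append the new queue length to fwkern.conf (expert mode, Gaia):
echo 'fwmultik_input_queue_len=8196' >> /var/opt/fw.boot/modules/fwkern.conf
# Reboot (failover first if this is an HA cluster member), then verify:
fw -i k ctl get int fwmultik_input_queue_len
```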

Monday, July 31, 2017

When It HVAC Rains, It Seems Like It Pours

Ok, I'm not a fan of HVAC problems. But while I was in my basement last night, I noticed water on my basement floor. Ugh.
It was coming from where the coil is housed. I've seen this before, but couldn't remember what the problem was.  See the first picture below. But then my wife reminded me that the hose line that directs condensation out of the unit was probably clogged up. And she was right. So I took our shop vac, put the hose on backwards so it would blow air out instead of vacuuming, and connected it to the water line. It cleared the line right out, and no more clog.