
6.3. Anatomy of a Dynamic Rule

Let us take a look at what a listing of one's stateful firewall
rules might look like:

00100 allow ip from any to any via lo0
00200 deny ip from any to 127.0.0.0/8
01000 check-state
02000 allow tcp from any to any keep-state
65535 deny ip from any to any
## Dynamic rules:
02000 9 1255 (T 54, # 0) ty 0 tcp, 192.168.0.1 2007 <-> 204.71.200.245 80

We are already familiar with the static rules, but this is the
first time we have shown an example of a dynamic rule. Let us examine it
closely:

The first field of the dynamic rule is the number of the static
rule that created it, in this case rule 2000, which carries the
"keep-state" option. The second field is the count of packets that have
matched the dynamic rule, and the third is the count of bytes carried by
those packets. In the parentheses, the 'T' value is the timeout value -
the remaining lifetime of the rule - in seconds; in this case, the rule
has 54 seconds to live. The number after the hash mark is the number of
the dynamic rule itself, in this case rule 0. The 'ty 0' field indicates
the type of the dynamic rule. The type corresponds to the flow the rule
matches - whether it passes traffic only from source to destination, only
the other way around, or both (bidirectional). Currently, only one type is
available, which is the default: bidirectional. This is indicated visually
by the "<->" symbol between the source and destination IP:port. After the
type, we see the protocol that the dynamic rule passes, followed by the
source IP:port, the bidirectional indicator "<->" mentioned above, and
finally the destination IP:port.
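
To make the layout concrete, here is the sample entry again with
each field labeled (a reading aid based on the description above, not
literal ipfw(8) output):

02000               number of the static rule that created the entry
9                   packets matched
1255                bytes matched
(T 54, # 0)         54 seconds of lifetime remaining; dynamic rule number 0
ty 0                rule type 0 (bidirectional)
tcp                 protocol
192.168.0.1 2007    source IP and port
<->                 bidirectional flow indicator
204.71.200.245 80   destination IP and port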

Even after dynamic rules time out, you will still see them listed
with "ipfw list", although with a T value of 0. Once a rule times out, it
no longer accepts packets as it normally would; it must be revived by new
traffic matching the same static rule with "keep-state". Expired entries
are also candidates for replacement by newly activated dynamic rules:
unless all of the dynamic rules are still alive, expired ones will be
continuously replaced with new ones, especially as the number of dynamic
rules approaches the maximum.

Once many dynamic rules have been created, it may become somewhat
of a nuisance to list the rules with 'ipfw list', as all of the dynamic
rules will stream off the terminal. To list only the static rules, one can
do something like:

ipfw list | grep -v '[<->#]'

Or, if one wishes to page down all of the rules, both static and
dynamic, one can:

ipfw list | more


7. Traffic Shaping

Traffic shaping refers to controlling traffic in various ways,
such as bandwidth capping, delays, flow queues, and so on. It allows one
to control the general intensity, direction, and breakup of the traffic.
Since the introduction of dummynet(4) in FreeBSD, extensive traffic
shaping capabilities have been available. Indeed, this is another area in
which IPFilter is unable to offer corresponding functionality. If one
needs traffic shaping capabilities to control individual user bandwidth
consumption, or to introduce delays in traffic for experimental and
testing purposes, one must use dummynet(4) with ipfirewall(4), with only
one exception: probability matching, which is supported entirely by
ipfirewall(4) without dummynet(4).

Traffic shaping rules cannot be combined with dynamic rules:
dynamic rules created by "keep-state" will not observe the bandwidth
caps, delays, separate flow queues, and so on.

7.1. Probability Matching

ipfirewall(4) supports a useful tool for network testing by
allowing one to simulate random packet drops with various probabilities.
This option uses the keyword "prob" followed by a floating point number
between 0 and 1 which corresponds to the probability with which the rule
will match a packet. So, a "prob 0.9" allow rule will pass packets matched
by that rule with a 90% probability, while a "prob 0.1" rule will do so
with a 10% probability. The following is the extended ipfw(8) syntax for
native probability matching:

        <command> [<rule #>] [prob <match_probability>] <action>
                [log [logamount <number>]] <proto>
                from <source> to <destination>
                [<interface-spec>] [<options>]

For example, if we wished to drop ICMP echo requests 20% of the
time, we could have the following rule:

add 1000 prob 0.8 allow icmp from any to any in icmptypes 8

Or, perhaps, we may wish to drop 50% of the TCP SYN packets
destined for the web server, to simulate heavy web traffic arriving via
the ep0 interface:

add 1000 prob 0.5 allow tcp from any to any in setup via ep0
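
Bear in mind that packets which fail a "prob ... allow" match are
not dropped by that rule; they simply fall through to the rules that
follow, so a later rule must deny them for the drop simulation to work. A
minimal sketch for the echo request example above:

add 1000 prob 0.8 allow icmp from any to any in icmptypes 8
add 1010 deny icmp from any to any in icmptypes 8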


7.2. Dummynet

All additional traffic shaping capabilities require dummynet(4),
which was introduced into FreeBSD in version 2.2.8. Before they can be
used, the kernel has to be compiled with the following option:

options         DUMMYNET

Once it is compiled in, one will be able to specify pipes for
traffic control. A pipe is a traffic shaping rule that controls traffic in
the specified manner, and is created with the ipfw(8) "pipe" command.
Traffic is redirected to pipes with ipfw(8) rules using the
"pipe <pipe #>" action. First let us construct a simple pipe (note: each
"pipe" command must be preceded by a call to /sbin/ipfw: ipfw pipe # ...):

pipe 10 config bw 100Kbit/s

This simple pipe will cap traffic flowing through it at a maximum
of 100 Kilobits per second. There are several different ways to indicate
the bandwidth measure: bit/s, Byte/s, Kbit/s, KByte/s, Mbit/s, MByte/s.
Each bandwidth limiting pipe must use the "bw" keyword.
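
Putting the pieces together at the shell, the pipe configuration
and a rule feeding traffic into it might look like the following (a
sketch; the ep0 interface is an assumption for illustration):

ipfw pipe 10 config bw 100Kbit/s
ipfw add 1000 pipe 10 ip from any to any via ep0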

Another way to control traffic is to use a delay, which could be
used to simulate system lag:

pipe 10 config delay 100

The value following "delay" is in milliseconds. In this example,
all traffic moving through this pipe will be delayed 100ms. We could also
accomplish the same thing as the "prob" indicator built into
ipfirewall(4) with the "plr" pipe keyword. For instance, to simulate 20%
packet loss as we did with "prob 0.8," we could construct the following
pipe:

pipe 10 config plr 0.2

"plr" stands for "packet loss rate" so the value indicates at what
rate packets will be lost, while the "prob" keyword for ipfw(8) indicates
the probability with which packets will pass. Therefore, "plr" values are
(1 - "prob") values. To simulate 20% packet loss with "prob" we indicate
that 0.8 of the traffic will make it through; with "plr" we indicate
that 0.2 of the traffic will not make it through. Try not to get confused
by the difference.
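
These keywords can also be combined in a single pipe
configuration; for instance, a sketch of a slow, lossy, high-latency link
(the particular values are arbitrary):

pipe 10 config bw 56Kbit/s delay 200 plr 0.05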


7.2.1. Pipe Queues

Next, one may need to control the queue sizes of their pipes,
especially if the MTU of their network device is relatively large. The MTU
of a network device defines the "maximum transmission unit" for that
interface, or, in other words, the maximum size a packet can take on that
interface. To learn the size of the MTU of a given network interface one
need only use ifconfig(8) to view its info; for instance:

(root@nu)~># ifconfig xl0
xl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        inet 192.168.0.1 netmask 0xffffffe0 broadcast 192.168.0.31
        ether 00:70:18:d4:a4:ac
        media: 10baseT/UTP (10baseT/UTP <half-duplex>)
        supported media: 10base5/AUI 10baseT/UTP <full-duplex> 10baseT/UTP
<half-duplex> 10baseT/UTP
(root@nu)~>#

Here we see that the NIC's MTU is 1500 (bytes). Indeed, 1500
bytes is the standard MTU for ethernet devices.

Queues are used by pipes to enforce the configured bandwidth
limitations and delays. Queues can be configured by either specifying
their size in "Kbytes" or slot amounts. Slots correspond to packets; in
other words, specifying that a queue have "10" slots is to specify that it
can only hold 10 packets. The maximum size of the packets is defined by
the MTU of the given network device. For ethernet devices, the MTU is 1500
bytes. This means that if one specifies that a pipe use a queue with 10
slots on an ethernet network, the queue size would be 10 x 1500 bytes, or
15 Kbytes.

This is important to understand because the default queue size
for pipes is 50 slots, which may well be too large on network devices
with a large MTU and a very low bandwidth limitation. 50 was chosen as
the default because it is the typical queue size for ethernet devices.
Under normal circumstances the queueing is imperceptible; however, when a
small bandwidth limitation is imposed, a full queue takes a long time to
drain, and this creates horrible network delays. For instance, suppose we
set the following pipe on an ethernet LAN to simulate a 56K modem:

pipe 10 config bw 56Kbit/s

... and we do not set a smaller MTU for the device with
ifconfig(8) or set a smaller queue (the preferred approach). The queue
into which packets are pumped for the pipe to enforce its bandwidth
limitation would then hold 1500 bytes (12000 bits) x 50 = 600Kbits. For a
pipe capping the bandwidth to 56Kbit/s, it would take roughly 10.7
seconds (600Kbits / 56Kbit/s) to drain a full 600Kbit queue. Such delays
would throw a severe monkey wrench into the experiment. To avoid such
complications, it is strongly advisable to set manually the queue sizes
that each pipe uses. As we noted earlier, the default queue size is given
in slots - 50 of them. The queue size can also be given in "Kbytes". This
latter approach is safer, because specifying the queue size in slots
leaves the MTU as an open variable which, if one is not paying attention,
can cause additional complications. The smaller the bandwidth cap, the
smaller the queue should be. For instance, using our example above, a
reasonable configuration would be:

pipe 10 config bw 56Kbit/s queue 5Kbytes
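
With this queue, the worst-case queueing delay drops to roughly
5 Kbytes (40 Kbits) / 56 Kbit/s, or about 0.7 seconds, instead of the
10.7 seconds computed above.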


7.2.2. Pipe Masks

A powerful capability of pipes is dynamic queueing: a single pipe
can maintain multiple queues, one per traffic flow. For instance, if one
has several boxes behind a firewall and wants to cap the bandwidth of
each one to 100Kbit/s, and not just the aggregate bandwidth of all
traffic moving through the gateway, one can either manually set up a pipe
and an ipfw rule for each host, or instead use dynamic queueing with pipe
masks. The masks define which hosts belong to the same queue, just as
netmasks define which hosts belong to the same network or subnet.

Masks can be specified in six different ways:

"dst-ip" - mask for the destination IP of the packets being sent
through the pipe.
"src-ip" - mask for the source IP.
"dst-port" - mask for the destination port.
"src-port" - mask for the source port.
"proto" - mask for the protocol.
"all" - mask for all of the above; specifies that all bits in all
fields (dst-ip, src-ip, etc.) are significant.

For example, let us return to the example of a network behind a
firewall whose hosts should each receive a 100Kbit/s bandwidth cap. If we
simply send all traffic through a pipe as described earlier, the cap is
applied to the aggregate traffic of all the hosts, not to each one
individually. To break each host's traffic into a separate queue, with
the bandwidth limit applied separately, one could do the following:

pipe 10 config mask src-ip 0x000000ff bw 100Kbit/s queue 10Kbytes
pipe 20 config mask dst-ip 0x000000ff bw 100Kbit/s queue 10Kbytes
add 1000 pipe 10 all from 192.168.0.0/16 to any out via <device>
add 2000 pipe 20 all from any to 192.168.0.0/16 in via <device>

At first glance this may seem confusing. For the first time we
have also included the ipfw(8) rules that divert packets to the pipes;
the two pipe configurations would not make much sense on their own
without seeing what is diverted to them. Pipe 10 caps the traffic passing
through it to 100Kbit/s, as does pipe 20. Rule 1000 diverts its traffic
to pipe 10, and rule 2000 to pipe 20. Rule 1000 matches all traffic
moving out and rule 2000 matches all traffic moving in. There are two
reasons to have separate pipes for incoming and outgoing traffic; one of
them will be addressed later. The primary reason, and the one that
concerns us now, is that each pipe configures a different mask. Pipe 10
configures mask 0x000000ff for source addresses; because rule 1000
diverts traffic leaving the internal network, the mask *must* be applied
to source addresses if we wish to break the flow from each internal host
into a separate queue. Likewise, for traffic coming in, the queues must
be broken up according to the destination addresses behind the firewall.

As you noticed, we specified the masks in hexadecimal rather than
decimal; either works. The masks work in exactly the same manner as
netmasks, which becomes clear once we realize they are applied in
reverse. With a netmask we are trying to group hosts into networks, so
the significant bits are at the beginning; here we are trying to break a
group up into hosts, so the significant bits of the mask are near the
end. With this reversed goal in mind, it makes sense that pipe masks look
backwards compared to netmasks. The hex mask we specified corresponds to
a decimal mask of 0.0.0.255. In simple terms, the last octet indicates
that each distinct host number (each different last octet) gets its own
queue. Thus, a separate queue for bandwidth control is set aside for each
address with a different host number. This presumes, of course, that
there are no more than 254 hosts on the network. If there are more hosts,
the mask must be adjusted. For instance, if there are up to 254^2 hosts
in the network behind the firewall, the mask would have to be 0.0.255.255
(0x0000ffff) to indicate that any address differing in a bit within the
last two octets gets its own queue.
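
As a sketch of that larger case, here is the outgoing pipe from
the example above adjusted for a /16 worth of hosts:

pipe 10 config mask src-ip 0x0000ffff bw 100Kbit/s queue 10Kbytes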


7.2.3. Pipe Packet Reinjection

Under most circumstances, once a packet is diverted to a pipe,
the traffic shaping configured for that pipe takes effect and the rule
search ends. However, one can have the packet reinjected into the
firewall after it passes through the pipe, starting at the rule following
the one that matched it, by disabling the following sysctl:

net.inet.ip.fw.one_pass: 1
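
That is, to enable reinjection one sets the sysctl to 0:

sysctl -w net.inet.ip.fw.one_pass=0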


8. Traffic Flow

It must always be remembered that rules which specify neither the
"in" nor the "out" flag are checked against traffic moving in AND out.
This has a number of implications. For instance, a pipe fed by rules
lacking "in" or "out" flags will be activated twice, once when a packet
leaves and once when it enters. In addition, not specifying an interface
with the "via" keyword can cause unwarranted confusion. If a multi-homed
system's firewall does not use "via", then traffic moving both ways
across any interface is matched by the "in" and "out" keywords: "in" will
match traffic coming from the outside AND from the local network, because
both are coming "in" to the gateway box.

Another concern is half-duplex versus full-duplex connexions. If
inward and outward traffic is diverted through the same pipe, the pipe
will simulate half-duplex traffic, simply because a pipe cannot simulate
traffic moving in both directions at the same time. If one is simulating
ethernet traffic, or simply using pipes to control ethernet traffic, this
is not a problem, for classic shared ethernet is a half-duplex network.
However, many other network connexions are full-duplex; as such, it is
usually safer to set up one pipe for inward traffic and one for outward.
This is the second reason for having two rules, each controlling one
direction, as mentioned in the previous section on pipe masks.
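
A sketch of the full-duplex arrangement, with one pipe per
direction (the 1Mbit/s cap and the ep0 interface are assumptions for
illustration):

ipfw pipe 10 config bw 1Mbit/s
ipfw pipe 20 config bw 1Mbit/s
ipfw add 1000 pipe 10 ip from any to any out via ep0
ipfw add 1010 pipe 20 ip from any to any in via ep0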


Appendix A: Example Firewall Configurations

Here follow a number of scenarios requiring firewalling. Each
scenario is answered with a firewall ruleset and a quick explanation of
how it works. For all examples, 12.18.123.0/24 will be used as the local
subnet, xl0 as the external NIC, and xl1 as the internal NIC.


Q) How do I block external pings, but allow myself to ping out
to any external host?

A) Stateful solution. The dynamic rules for icmp packets use the
net.inet.ip.fw.dyn_short_lifetime setting, which is 5 seconds by default.
The advantage of the stateful solution is that only echo replies from the
specific host you pinged will be accepted.

add 1000 deny icmp from any to 12.18.123.0/24 in via xl0 icmptypes 8
add 1010 check-state
add 1020 allow icmp from 12.18.123.0/24 to any out via xl0 icmptypes 8 keep-state
add 1030 deny icmp from any to any

The reason for placing the deny rule before the check-state rule
is that dynamic rules are bi-directional. Without it, while a dynamic
rule is alive, echo requests from external hosts would match it during
the check-state and be answered; essentially, during the short lives of
the dynamic rules, your host would be pingable. Because of this, echo
requests from external hosts are filtered out before the check-state
rule.

Stateless solution. The advantage of the stateless solution is
less overhead, because there are fewer rules to process; but the overhead
of dealing with occasional pings shouldn't be an issue for the most part,
so this advantage is negligible.

add 1000 deny icmp from any to 12.18.123.0/24 in via xl0 icmptypes 8
add 1010 allow icmp from 12.18.123.0/24 to any out via xl0 icmptypes 8
add 1020 allow icmp from any to 12.18.123.0/24 in via xl0 icmptypes 0

The disadvantage of the stateless approach is that it will always
accept echo replies from any host, as opposed to the stateful approach
which will only accept echo replies from the specific host that was
pinged.

Q) How do I block private subnets as defined in RFC 1918 from
entering or exiting my network?

A)

add 1000 deny all from 192.168.0.0/16 to any via xl0
add 1010 deny all from any to 192.168.0.0/16 via xl0
add 1020 deny all from 172.16.0.0/12 to any via xl0
add 1030 deny all from any to 172.16.0.0/12 via xl0
add 1040 deny all from 10.0.0.0/8 to any via xl0
add 1050 deny all from any to 10.0.0.0/8 via xl0

Q) How would I enforce rate limiting on each host in my network
individually? I want to enforce an upstream limit of 64Kbit/s and a
downstream of 384Kbit/s for each host; in addition, I want to disallow all
external hosts from initiating connexions with the hosts on my network so
that no one can run any servers.

A) This might be similar to a setup enforced at a university. It
can easily be set with the following rules. (Note: for this ruleset to
block unsolicited inbound traffic as intended, packets must be reinjected
into the firewall after passing through the pipes, i.e.
net.inet.ip.fw.one_pass must be set to 0 as described in section 7.2.3;
otherwise the pipe rules would accept all traffic before the stateful
rules are ever reached.)

pipe 10 config mask src-ip 0x000000ff bw 64Kbit/s queue 8Kbytes
pipe 20 config mask dst-ip 0x000000ff bw 384Kbit/s queue 8Kbytes
add 100 deny icmp from any to 12.18.123.0/24 in via xl0 icmptypes 8
add 110 check-state
add 1000 pipe 10 all from 12.18.123.0/24 to any out via xl0
add 1100 pipe 20 all from any to 12.18.123.0/24 in via xl0
add 1200 allow tcp from 12.18.123.0/24 to any out via xl0 setup keep-state
add 1210 allow udp from 12.18.123.0/24 to any out via xl0 keep-state
add 1300 allow icmp from 12.18.123.0/24 to any out icmptypes 8 keep-state
add 65534 deny all from any to any

  Lasker

