Channel: Syed Jahanzaib – Personal Blog to Share Knowledge !

Hotspot User Change Password FORM for ‘User Self Management’


How can you provide Mikrotik Hotspot users an option to change their password using a form or web page?

The simple answer is to configure USER MANAGER and provide its User Panel, which is very nice and informative and also allows users to change their password. But what if you don’t want to install User Manager, or what if you don’t want users to be able to change their other information via the User Panel? Since the Mikrotik source code is not public, we cannot hide that option (as far as my limited knowledge goes). Using the form-based technique, you can simply give users a web page from which they can change their password whenever required.

You can also add more functions to this page: for example, it can send an email or append an entry to a log file, so that the admin knows at what time the last password was changed, plus other functions as required.
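As a rough sketch of the logging idea (the log path and placeholder values here are hypothetical; in the real form the username comes from the RouterOS API lookup shown further below):

```php
<?php
// Hypothetical logging sketch, not part of the original form.
// Appends one line per password change so the admin can see when it happened.
$hotspotUsername = 'demo_user';                   // in the real form: resolved via the RouterOS API
$clientAddress   = '192.168.30.100';              // in the real form: $_SERVER['REMOTE_ADDR']
$logFile         = '/var/log/hotspot_passwd.log'; // hypothetical path, must be writable by Apache

$logLine = sprintf("[%s] password changed for hotspot user '%s' from %s\n",
    date('Y-m-d H:i:s'), $hotspotUsername, $clientAddress);
file_put_contents($logFile, $logLine, FILE_APPEND | LOCK_EX);
```

A `mail()` call could be dropped in the same place to notify the admin by email.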

This is a simple password change form for hotspot users. After they have logged in to the hotspot, they can change their own password using this simple form.

REQUIREMENTS:

- Linux-based system (I used UBUNTU, but you can use any flavor of your own choice)
- Apache / PHP5.x / PEAR2 library

Also make sure you have enabled the API service in MIKROTIK under IP > Services,
as shown in the image below …


LINUX SECTION

First, update your Ubuntu (if it’s not already updated on a fresh installation)

apt-get update

Now Install Apache Web Server with PHP5

 apt-get install apache2 php5

Don’t forget to restart the apache2 service; otherwise, when you try to open the password change form, the browser will ask you to save the file instead of rendering it :D

service apache2 restart

Now we have to download the PEAR2 support library so that the RouterOS functions can be performed via the WEB.
Go to your web folder, download the pear2 library, and extract it:

cd /var/www
 wget http://wifismartzone.com/files/linux_related/pear2.tar.gz
 tar zxvf pear2.tar.gz

OK, now it’s time to create the change password page so that users can access it; you can also link it from your status page for users’ convenience.

touch /var/www/changepass.php
 nano /var/www/changepass.php

and paste the following code.
{Make sure to change the IP address of Mikrotik and its admin ID Password}

<?php
use PEAR2\Net\RouterOS;
require_once 'PEAR2/Autoload.php';

$errors = array();

try {
    //Adjust RouterOS IP, username and password accordingly.
    $client = new RouterOS\Client('192.168.30.10', 'admin', 'admin');

    $printRequest = new RouterOS\Request(
        '/ip hotspot active print',
        RouterOS\Query::where('address', $_SERVER['REMOTE_ADDR'])
    );
    $hotspotUsername = $client->sendSync($printRequest)->getArgument('user');
} catch(Exception $e) {
    $errors[] = $e->getMessage();
}

if (isset($_POST['password']) && isset($_POST['password2'])) {
    if ($_POST['password'] !== $_POST['password2']) {
        $errors[] = 'Passwords do not match.';
    } elseif (empty($errors)) {
        //Here's the fun part - actually changing the password
        $setRequest = new RouterOS\Request('/ip hotspot user set');
        $client($setRequest
            ->setArgument('numbers', $hotspotUsername)
            ->setArgument('password', $_POST['password'])
        );
    }
}

?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Change your hotspot password sample page in PHP / Syed Jahanzaib.PK-KHI</title>
        <style type="text/css">
            #errors {background-color:darkred;color:white;}
            #success {background-color:darkgreen;color:white;}
        </style>
    </head>
    <body>
        <div>
            <?php if (!isset($hotspotUsername)) { ?>
            <?php } else { ?>
<h3>
<pre><span style="color: blue">PA</span><span style="color: red">KI</span><span style="color: purple">ST</span><span style="color: orange">AN</span> <span style="color: green">ZINDABAD</span> ...JZ!!</pre>
</h3>
<h2>
<br>HOTSPOT ... Sample password change FORM <br><br>
You are currently logged in as "<?php
                    echo $hotspotUsername;
                ?>"</h2>

            <?php if(!empty($errors)) { ?>
            <div id="errors"><ul>
                <?php foreach ($errors as $error) { ?>
                <li><?php echo $error; ?></li>
                <?php } ?>
            </ul></div>
            <?php } elseif (isset($_POST['password'])) { ?>
            <div id="success">Your password has been changed.</div>
            <?php } ?>

            <form action="" method="post">
                <ul>
                    <li>
                        <label for="password">New password:</label>
                        <input type="password" id="password" name="password" value="" />
                    </li>
                    <li>
                        <label for="password2">Confirm new password:</label>
                        <input type="password" id="password2" name="password2" value="" />
                    </li>
                    <li>
                        <input type="submit" id="act" name="act" value="Change password" />
                    </li>
                </ul>
            </form>
            <?php } ?>
        </div>
    </body>
</html>

Now once the user has logged in to the hotspot, he can access the page as below.

http://192.168.30.50/changepass.php

As shown in the image below …

changepass

.

.

log

Credits and legal stuff

Author: Vasil Rangelov, a.k.a. boen_robot (boen [dot] robot [at] gmail [dot] com)

Regard’s
Syed Jahanzaib


Filed under: Mikrotik Related

Blocking http/https Facebook via automated address-list


Recently I was working at a remote network in GHANA where a hotspot was deployed for school students, and it was school policy to have a central filter policy blocking access to adult web sites and Facebook. Blocking adult web sites was easy using OPENDNS and forcing users’ DNS traffic to pass through it, but blocking Facebook was a bit tricky because it uses HTTPS, and a web proxy cannot filter secure traffic. In the past I have blocked Facebook (and the like) with various methods such as content / L7 filtering, but personally I prefer to maintain an address-list of the FB servers’ IP addresses using an automated script. This way I have more control over the block policy.

The script below (which can be scheduled to run every 5 minutes, hourly, or as required) will create an address list, and a filter rule will later block requests going to this address list.

First create the script, which will catch facebook.com entries in the DNS cache and add them to the “facebook_dns_ips” address list.
Open the Terminal and paste the following script.

1) SCRIPT:

# Script to add Facebook DNS IP addresses
# Syed Jahanzaib / aacable@hotmail.com
# Script Source: N/A / GOOGLE : )

:log warning "Script Started ... Adding Facebook DNS ip's to address list name facebook_dns_ips"
:foreach i in=[/ip dns cache find] do={
    :local bNew "true";
    :local cacheName [/ip dns cache all get $i name] ;
    :if ([:find $cacheName "facebook"] != 0) do={
        :local tmpAddress [/ip dns cache get $i address] ;
        :put $tmpAddress;
        :if ( [/ip firewall address-list find ] = "") do={
            :log info ("added entry: $[/ip dns cache get $i name] IP $tmpAddress");
            /ip firewall address-list add address=$tmpAddress list=facebook_dns_ips comment=$cacheName;
        } else={
            :foreach j in=[/ip firewall address-list find ] do={
                :if ( [/ip firewall address-list get $j address] = $tmpAddress ) do={
                    :set bNew "false";
                }
            }
            :if ( $bNew = "true" ) do={
                :log info ("added entry: $[/ip dns cache get $i name] IP $tmpAddress");
                /ip firewall address-list add address=$tmpAddress list=facebook_dns_ips comment=$cacheName;
            }
        }
    }
}
# FB DNS IP ADD Script Ended ...
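To test the script without waiting for the scheduler, you can run it manually from the terminal and then inspect the list (assuming the script was saved under the name facebook-list, which is the name the scheduler’s on-event refers to):

```
/system script run facebook-list
/ip firewall address-list print where list=facebook_dns_ips
```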

 ↓

2) SCHEDULER:

Schedule the script to run every 5 minutes (or on an hourly basis).

/system scheduler
add disabled=no interval=5m name=fb-script-run-schedule on-event=facebook-list policy=ftp,reboot,read,write,policy,test,winbox,password,sniff,sensitive,api start-date=feb/11/2014 start-time=00:00:00

3) FILTER RULE:

Now create a FIREWALL FILTER rule which will actually DROP requests going to the facebook_dns_ips address list.
[Make sure to move this rule to the TOP, or at least before any general accept rule, in the Filter section.]

/ip firewall filter
add action=drop chain=forward comment="Filter Rule to block FB address LIST : )" disabled=no dst-address-list=facebook_dns_ips

Now try to access Facebook: it will open as usual, but as soon as the script runs, an address list will be created with the FB IP addresses, and access will be blocked. If you are running the script manually, execute it several times (while accessing FB) so it adds all detected IPs to the list; if it runs from the scheduler, just leave it, and it will update itself every 5 minutes (or as scheduled), adding the IPs to the list.


As shown in the image below ...

fb-script-address

filter-rule

.

TIME BASE FILTER RULE

You can also use this technique to block FB only at specific times. For example, to block access to FB from 9 AM to 10 AM, use the following filter rule.

/ip firewall filter
add action=drop chain=forward comment="Filter Rule to block FB address LIST : )" disabled=no dst-address-list=facebook_dns_ips time=9h-10h,sun,mon,tue,wed,thu,fri,sat

.

Note:

You should force users to use your DNS server as their primary DNS server; otherwise their queries will never appear in your DNS cache and the script will have nothing to collect :p

/ip firewall nat
add chain=dstnat action=dst-nat to-addresses=192.168.1.1 to-ports=53 protocol=tcp dst-port=53
add chain=dstnat action=dst-nat to-addresses=192.168.1.1 to-ports=53 protocol=udp dst-port=53
 
Also, if you want to block access for some users only, simply create an address list with those users (or their pool) and reference it in src-address (or src-address-list) specifically.
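For example, a sketch of limiting the block to one group only (the restricted_users list name and the subnet below are hypothetical placeholders):

```
/ip firewall address-list
add address=192.168.1.0/24 list=restricted_users comment="users subject to FB block"

/ip firewall filter
add action=drop chain=forward comment="Block FB for restricted users only" disabled=no \
    dst-address-list=facebook_dns_ips src-address-list=restricted_users
```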

Regard's
Syed Jahanzaib


Filed under: Mikrotik Related

Quick Note on Winbox Save Password Security Issue.


I know it’s not recommended to save the password in Mikrotik WINBOX (passwords are stored in clear text in winbox.cfg in the local PC user profile), but we HUMANS, being lazy or weak of memory, sometimes prefer to save the password on the management PC, and sometimes this PC is also shared with other co-admins/colleagues due to lack of resources :p

In my opinion, it amounts to an annoying backdoor / password leak issue in WINBOX.

winbox-security-issue

Mikrotik developers should really focus on this area and protect the stored passwords with strong encryption. I used this a few months back at a friend’s admin PC to fetch the ID and password with all details, as shown in the image. Just imagine what would happen if it fell into the wrong hands …

Reference: http://forum.mikrotik.com/viewtopic.php?f=2&t=81816

Regard’s
Syed Jahanzaib


Filed under: Mikrotik Related

Mikrotik Script to Export PPP users to USER MANAGER


As requested, following is a quick and dirty way to export Mikrotik local PPP (pppoe) users to USER MANAGER with the same profile assigned as in the LOCAL profile section. I used the word dirty because there is no officially supported method that can do it with a single CLI command or one GUI window.

Consider the following scenario:

Mikrotik is configured as a PPPoE server and has two profiles, named 512k and 1mb, and 6 users in the ppp section …
As shown in the image below …

2-mt-profile

3-users-mt.

Our task is to migrate all local ppp users to USERMAN with minimal manual work.

First open User Manager and configure/add the NAS, so that Mikrotik can communicate with UserMAN and vice versa.

Now add the same profiles in User Manager as are present in the local Mikrotik PPP section.
[This task can be done via CLI too; an example is at the end.]
As shown in the image below …

1b-userman-profiles.

.

Now, as far as my dumb mind goes, I couldn’t find a way to assign a profile to a user using the /tool user-manager menu, so to overcome this issue, I first created two users with the same names as the profiles.

For example, if the profile name is 512k, then create a user named “512k”; it will be used as a master copy for cloning :D
As shown in the image below …

1-userman.

.

The Userman section is done; moving to the Mikrotik section…

Go to System > Scripts, add a new script, and use the following code…

# PPP Export to USERMAN SCRIPT START
:log error "Make sure you have usermanager configured properly and created same profile names with same user name (master users for cloning) in USERMAN / Jz"

# Applying Loop for ppp secret section to fetch all user details
/ppp secret
:foreach i in=[find] do={
:local name [get $i name]
:local pass [get $i password]
:local profile [get $i profile]
:local comment [get $i comment]

#Printing User names and other details for record purpose ...
:log warning "Fetching USER details from /ppp secret section , Found $name $pass $profile $comment for EXPORT"

#Creating Users in User Manager with ID / Password / Profile and Comments ...
/tool user-manager user add name=$name password=$pass customer=admin copy-from=$profile comment=$comment
}
:log error "DONE. Script END. Now logout from USERMAN and RE login and check users section"

# Script End.

The result would be something like …
As shown in the image below …

4-log.

.

Now log out from User Manager, log back in, and check the USERS section again :)
The result would be something like …
As shown in the image below …

5- user-end.

.

This is just an example; you can do much more by adding various functions, variables, or constraints to the script :)
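For instance, a minimal (untested) variation of the script above that exports only the users of one profile, using a where clause in the find:

```
# Hypothetical variation: export only the users assigned to the 512k profile
/ppp secret
:foreach i in=[find where profile="512k"] do={
    :local name [get $i name]
    :local pass [get $i password]
    :local comment [get $i comment]
    /tool user-manager user add name=$name password=$pass customer=admin copy-from="512k" comment=$comment
}
```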

Example of CLI-based profile addition:

/tool user-manager profile
 add name=512k name-for-users="512k Package" override-shared-users=off owner=admin \
 price=500 starts-at=logon validity=4w2d
 add name=1mb name-for-users=1mb override-shared-users=off owner=admin price=500 \
 starts-at=logon validity=4w2d

/tool user-manager profile limitation
 add address-list="" download-limit=0B group-name="" ip-pool="" name=512k \
 rate-limit-min-rx=524288B rate-limit-min-tx=524288B rate-limit-rx=524288B \
 rate-limit-tx=524288B transfer-limit=0B upload-limit=0B uptime-limit=0s
 add address-list="" download-limit=0B group-name="" ip-pool="" name=1mb \
 rate-limit-min-rx=1048576B rate-limit-min-tx=1048576B rate-limit-rx=1048576B \
 rate-limit-tx=1048576B transfer-limit=0B upload-limit=0B uptime-limit=0s
 /tool user-manager profile profile-limitation
 add from-time=0s limitation=512k profile=512k till-time=23h59m59s weekdays=\
 sunday,monday,tuesday,wednesday,thursday,friday,saturday
 add from-time=0s limitation=1mb profile=1mb till-time=23h59m59s weekdays=\
 sunday,monday,tuesday,wednesday,thursday,friday,saturday

.

Remember ….

Sky is the only limit …

.

.

Regard’s
Syed Jahanzaib


Filed under: Mikrotik Related

Radius Manager 4.1 Patch5 Deployment


dma415

DMASOFTLAB released patch 5 for Radius Manager 4.1 version. [Release Date: 10 Feb, 2014]

FIXES, IMPROVEMENTS:

-default service (srvid 0) find users issue problem fixed
-verification code and mobile number fixed in ACP / edit user
-invalid menu.css reference removed (buyiasmain_tpl.htm, adminmainblank_tpl.htm)
-traffic summary per NAS issue fixed
-connection allowed bug fixed
-multiple email address problem fixed in edit and new user forms
-privileged sim-use edit problem fixed
-enhanced syslog alerts [Helped a lot in troubleshooting now]
-swapped SMS / email alerts fixed (ACP / edit user)
-self registration welcome SMS / email issue fixed
-upon user removal accounting details are also deleted from rm_radacct
-duplicate batch billing problem fixed
-auto renewal uses unit fields instead of initial fields
-expired online time yellow color problem fixed in ACP / List users view
-password recovery updates radcheck for regular users only
-hotspot MAC account password change problem fixed (UCP)
-corrected user name in password recovery email
-bulk SMS custom tag issue fixed
-convert card prefixes to lower case in radcheck
-self registration displays user name, password
-zero gigawords issue fixed with a non Mikrotik NAS
-search users leading and trailing space issue fixed
-SMS, email expiry alerts issue fixed
-grace period account disable bug fixed
-negative deposit addition problem fixed [Good news for Alex]
-IAS duplicate mobile number problem fixed
-card generator issue fixed (PIN length > 10)
-next service issue fixed **** This bug was quite annoying and wasted many hours in useless troubleshoot :( Jz
-properly logout grace period expired users
-rmauth IAS and card setup crash fixed
-increased CTS logging capacity (rmconntrack DELAY_KEY_WRITE option)

DEPLOYMENT:

Deployment is fairly simple.
First download the radiusmanager-4.1-cumulative_patch.tgz
Extract it into any temp folder

mkdir /temp
cd /temp
wget http://wifismartzone.com/files/rm_related/radiusmanager-4.1-cumulative_patch.tgz
tar zxvf /temp/radiusmanager-4.1-cumulative_patch.tgz
cd radiusmanager-4.1-cumulative_patch/
ls

You should see the following contents:

root@rm:/temp/radiusmanager-4.1-cumulative_patch# ls
bin  raddb  readme.txt  www

1. Copy PHP files to /var/www/html/radiusmanager (Fedora) or /var/www/radiusmanager [Debian, Oh yeah, That's my Boy ;)] directory.

For Ubuntu
cp -vrf  www/radiusmanager/*  /var/www/radiusmanager

For Fedora
cp -vrf  www/radiusmanager/*  /var/www/html/radiusmanager

2. Chmod all binaries to 755:

chmod 755 bin/rm*

3. Stop rmpoller and copy the binaries to /usr/local/bin directory, overwriting the old versions.

service rmpoller stop
cp bin/* /usr/local/bin

4. Copy acct_users to /usr/local/etc/raddb directory.

cp raddb/acct_users /usr/local/etc/raddb

5. Change permission of acct_users by chmod:

chmod 640 /usr/local/etc/raddb/acct_users
chown root.root /usr/local/etc/raddb/acct_users

6. Restart radiusd

service radiusd restart

.

Now re-login to the ACP, and hopefully you will see 4.1.5 :D
As shown in the image below …

dma415

Regard’s
Syed Jahanzaib


Filed under: Radius Manager

IBM Lotus Domino Fix Packs Upgrade Error


A few days back, I was upgrading Lotus Domino 8.5.3 Fix Pack 4 to Fix Pack 6, and during the upgrade I encountered the following error …

lotus-upgrade-error

.

To solve it, Make sure that

  • Lotus DOMINO is stopped by using the QUIT command in the Domino console,
  • Lotus Services are STOPPED in services before running the upgrade package
  • Any Lotus CONSOLE is closed
    [I forgot to close the console, which resulted in the wastage of 15 precious minutes on a live production server; anyway, this is how you learn things in real life.]

http://www-10.lotus.com/ldd%5Cfixlist.nsf/WhatsNew/2ca7aa993e50ba8285257c1d006472bd?OpenDocument

8.5.3 Fix Pack 6 Preliminary Fix List descriptions:

Client

  • SPR# TSHI8SD538(LO68047) – Fixed an intermittent Notes client crash when opening a corrupted Notes document.
  • +SPR# MLAT99RKAG(LO76668) – Improved javascript disablement and disabled for HTML Email messages (body field and memo form) only. This regression was introduced in 8.5.3 FP5.
  • SPR# ACHG8STC6T(LO68380) – Fixes intermittent Notes Client crash when the user hits “send” on a large email (also the email is lost).
  • SPR# MCHZ8R4HPK(LO67040) – “Search Directory For” results in Typeahead are displayed in Alphabetical Order. (technote 1580001)

Server

  • SPR# KBRN8Q6JXC(LO71360) – Performance and reliability fix to network session code. Prior to this fix, many users accessing a Domino server simultaneously could cause a performance bottleneck, resulting in slow server response or timeouts attempting to connect to the server. The error ‘Unable to redirect failover from <SERVERNAME>’ could also appear, where SERVERNAME is the name of the server encountering the issue.
  • SPR# JPAI94HR3N(LO75003) – Fixes potential deadlock on process startup between LkMgr locker and semaphore locker(Directory manager queue semaphore). (technote 1644240)
  • SPR# MYAA8LV385(LO64012) – Fixes an issue where an incorrect warning for a database over quota threshold could be generated.
  • +SPR# RMAA94WKMG(LO73956) – Fixes intermittent Domino Server crash when closing a database. This regression was introduced in 8.5.2. (technote 1644232)
  • SPR# VPRS8YBRZ6(LO71728) – Fixes Domino Server mail relay host crash on router on Jonah::asn_sorted::encode_value
  • +SPR# AJMO8NVM8F(LO66491) – Prevent Directory Assistance on Domino 64-bit servers from doing unnecessary search references and referrals which were leading to “81″ LDAP timeout errors. This regression was introduced in 8.5.
  • SPR# JPMS8KZLLC(LO63217) – Fixes Domino Server crash during database cache maintenance with PANIC: ERROR – LockMemHandle() Handle 0xF0259F47 is not allocated
  • SPR# PPET98CPBN(LO7562) – Security enhancement to scrub query strings causing search to fail; work around is to add the following notes.ini: HTTP_QUERY_STRING_SCRUB=0. This fix changes the default to be off instead of being on and adds new code to prevent security X-Site script attacks against search urls.
  • SPR# AJAS8WSB9B(LO70861) – Prior to this fix multiple “Received” headers could be overwritten by one when retrieving e-Mails with IMAP client.
  • SPR# KHAN87ZUTS(LO55991) – Prevents excessive InsertPermutations recursion that can lead to a Domino Server crash. The new notes.ini variable MAX_PERMUTE_RECURSE=<number>, where <number> limits the number of hierarchical responses that can be added to a given collection, is recommended to be set to 200. (technote 1600317)
  • +SPR# PHEY8UDJYW(LO65911) – Fixes ACL corruption with: “ACL Corrupt in database <Database_Name> creating new ACL with default set to no access”. Now we block unintended deletion of the ACL Note that would lead to a DB set to no access. This was a regression introduced in 8.5.3.

iNotes

  • SPR# WRAY8QKLTQ(LO66604) – Fixed issue where when opening messages in iNotes Ultra Light Mode, that have mixed case mail file names specified in the URL, the mail message fails to open.
  • SPR# KRAU8Y2MX6(LO71593) – Fixes issue where the iNotes UI window shrinks to a small size when the iNotes UI is resized several times.
  • SPR# HKOA7T4DN5(LO49113) – Notes web: Fixed an issue where the web browser could hang if a window is resized to or from a very small size.
  • SPR# PTHN96NRTP(LO45468) – Notes web: Fixed an issue where the unread count on a folder is not updated automatically when new messages were transferred into it via a mail rule. Clicking on the folder or using F5 to refresh would update the count.
  • +SPR# HSKM8TN39T(LO68949) – Fixed problem which caused a custom sized table to be inserted in the wrong place in the Rich Text Editor. This is a regression in 8.5.3.

Regard’s
Syed Jahanzaib


Filed under: IBM Related

Blocking Client ROUTER Access


ttl

As requested by a virtual friend, who has a small network in a rural area with a small amount of bandwidth: he wanted to block access for clients who use a WIFI / client ROUTER and share the connection with other members, because the operator is losing ‘POTENTIAL’ customers this way. The following trick worked like a charm to block client router access.

At your main router, add following rule,

/ip firewall mangle
add action=change-ttl chain=forward comment="Block Client NAT/Router  / zaib" disabled=no in-interface=LAN new-ttl=set:1 passthrough=no

The above rule sets the TTL of forwarded packets to 1. When such a packet reaches a client router and is forwarded onward, the TTL is decremented to 0 and the packet is dropped, so it never goes beyond that point to the devices behind the client router. BUT if the client uses a normal PC directly, he will still be able to access the internet.

1- block client router

DISCLAIMER:
Do remember one point: the above method is not 100% effective. There is a workaround for just about anything; no security is 100% foolproof.
If the client uses a Mikrotik router, he can create a mangle rule of his own which increments the TTL value, making the above restriction useless. Something like the following:

/ip firewall mangle add action=change-ttl chain=prerouting in-interface=WAN new-ttl=increment:1

lolz

But you can create a script that keeps track of other Mikrotik boxes on your network via the Mikrotik discovery protocol, as only very few admins secure their Mikrotik router to the full extent by blocking discovery, changing the default winbox port, blocking any access on the WAN port, etc.
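A minimal sketch of that idea (assuming the LAN interface is named LAN; the log text is arbitrary): walk the neighbor table that MikroTik discovery fills and log anything seen there for manual inspection.

```
# Hypothetical sketch: log RouterOS neighbors seen on the LAN via discovery
:foreach n in=[/ip neighbor find where interface="LAN"] do={
    :local nbAddr  [/ip neighbor get $n address]
    :local nbIdent [/ip neighbor get $n identity]
    :log warning "Discovery: possible client router '$nbIdent' at $nbAddr - check it"
}
```

This could be scheduled like the other scripts on this blog and extended to email/SMS the admin.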

Happy Fire-walling !!! Jz

Personally I am not in favor of imposing harsh restrictions on clients except for bandwidth or quota, but since Mikrotik is capable of creating solutions out of the box, this is just one tiny example ;)

.

Regard’s
Syed Jahanzaib


Filed under: Mikrotik Related

Display Maintenance Message To Users


This is just a simple reference guide on how to display a maintenance notice page to clients when the main internet link is down. You can add many advanced functions to it; I shared this method just to give you an idea of how it can be done. The result is that it can greatly help in reducing client calls to the help line in the event of internet downtime.

As someone asked on FB, I decided to make it public so those who don’t know about it can get an idea of how simple it is to achieve. I implemented this technique at a local network which had a small number of clients on an unstable PTCL DSL connection. It helped keep the operator informed about the connectivity status via SMS (a GSM modem was attached to the Mikrotik, and the netwatch script also sends an SMS to the operator about the link status).

The theory is simple. First create a NAT rule that redirects HTTP port 80 requests to your local/external proxy service, which denies all requests and redirects them to a local web server page showing the MAINTENANCE PAGE. Make sure to disable this rule after its creation.

Now create a NETWATCH rule that keeps monitoring a reliable HOST on the internet, probably your ISP DNS or GOOGLE DNS. If the link goes down, the DOWN script is triggered, which enables the NAT rule, so all users are routed to the maintenance page; when the link comes back UP, the UP script disables the NAT rule and the internet starts working normally again at the user end.

Example:

First, the NAT rule which actually redirects port 80 requests to the internal/external proxy server.
[Make sure the comment remains the same in all the nat / netwatch rules, otherwise the script will not work.]

/ip firewall nat
add action=redirect chain=dstnat comment="Redirect to Proxy" disabled=yes dst-port=80 protocol=tcp to-ports=8080

Now ENABLE the web proxy, which will deny all users’ port 80 requests and redirect them to the local web server page showing the reason why the internet is not working.

/ip proxy
set always-from-cache=no cache-administrator=webmaster cache-hit-dscp=4 cache-on-disk=yes enabled=yes max-cache-size=unlimited max-client-connections=600 \
max-fresh-time=3d max-server-connections=600 parent-proxy=0.0.0.0 parent-proxy-port=0 port=8080 serialize-connections=no src-address=0.0.0.0

/ip proxy access
add action=deny disabled=no dst-port="" redirect-to=10.0.0.1/netdown.html

Now the Netwatch script, which will keep monitoring the internet and act accordingly.

/tool netwatch
add disabled=no down-script=":log error \"ISP Link seems to be DOWN  , ENABLING  redirection to proxy so users will see mainteneace page / zaib\"\r\
\n/ip firewall nat enable [find comment=\"Redirect to Proxy\"]" host=8.8.8.8 interval=5s timeout=1s up-script=":log error \"ISP Link seems to be UP again , Disa\
bling redirection to proxy so users internet will start work again. / zaib\"\r\
\n/ip firewall nat disable [find comment=\"Redirect to Proxy\"]\r\
\n"

Result [when the internet link is down]:

Attachment:
rule.png

linkdown-cleint

 

You can achieve the same task with more elegance and in a more controlled way by using SCRIPTS to perform various functions: frequency control, pinging multiple hosts instead of a single destination, acting according to latency/load results, email / SMS functions, and much more.
As someone said:

Quote:

" SKY  IS  THE  ONLY  LIMIT "

.
.
Regard's
Syed Jahanzaib

Filed under: Mikrotik Related

Mikrotik Hotspot: Different login page for Mobile Users


Recently a Nigerian friend asked how to configure a different login page for mobile users, which could be lightweight and customized for a mobile/PDA screen size with a customized welcome message. Following is a quick method to display a different login page if the user is logging in from a mobile device, and the default login page for desktop users.

This is a quick method; if you want something more sophisticated, like detecting the client by device type, you can use variable functions and act accordingly.
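For example, a user-agent based check could be sketched like this (the regex below is an assumption; extend it for the devices you care about):

```javascript
// Hypothetical sketch: classify the client by user-agent string instead of screen size.
function isMobileUserAgent(ua) {
    return /Android|iPhone|iPad|iPod|Windows Phone|Mobile/i.test(ua);
}

// In login.html this would drive the same redirect as the screen-size check:
// document.location = isMobileUserAgent(navigator.userAgent) ? "mobilelogin.html" : "otherlogin.html";
```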

First the logic: You have to create 3 html pages,

1-     login.html
2-    mobilelogin.html
3-    otherlogin.html

1- login.html [Redirector which checks the user’s device/screen size]

The login.html page is a kind of redirector which actually checks the screen size of the client device. If it finds it to be 800x600 or less, it will assume it is a mobile device and redirect to mobilelogin.html;
otherwise it will display another login page, otherlogin.html, which can be the default login page for everyone.

First create login.html

<script type="text/javascript">
if ((screen.width<=800) && (screen.height<=600)) {
document.location="mobilelogin.html";
}
else {
document.location="otherlogin.html";
}
</script>

♦♦♦

2- mobilelogin.html [lightweight login page for mobile users]

mobilelogin.html is displayed if the client device/screen size is 800x600 or less. You can modify it as per your requirements.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<head>
<title>internet hotspot > login</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="expires" content="-1" />
<style type="text/css">
body {color: #737373; font-size: 10px; font-family: verdana;}

textarea,input,select {
background-color: #FDFBFB;
border: 1px solid #BBBBBB;
padding: 2px;
margin: 1px;
font-size: 14px;
color: #808080;
}

a, a:link, a:visited, a:active { color: #AAAAAA; text-decoration: none; font-size: 10px; }
a:hover { border-bottom: 1px dotted #c1c1c1; color: #AAAAAA; }
img {border: none;}
td { font-size: 14px; color: #7A7A7A; }
</style>

</head>

<html>
<body>

<div align="center">
<a href="$(link-login-only)?target=lv&amp;dst=$(link-orig-esc)">Latviski</a>
</div>
<div align="center">
<b><font size="4">mobile user</font></b></div>

<table width="100%" style="margin-top: 10%;">
<tr>
<td align="center" valign="middle">
<div style="color: #c1c1c1; font-size: 9px">Please log on to use the internet hotspot service<br />$(if trial == 'yes')Free trial available, <a style="color: #FF8080"href="$(link-login-only)?dst=$(link-orig-esc)&amp;username=T-$(mac-esc)">click here</a>.$(endif)</div><br />
<table width="280" height="280" style="border: 1px solid #cccccc; padding: 0px;" cellpadding="0" cellspacing="0">
<tr>
<td align="center" valign="bottom" height="175" colspan="2">
<form name="login" action="$(link-login-only)" method="post"
$(if chap-id) onSubmit="return doLogin()" $(endif)>
<input type="hidden" name="dst" value="$(link-orig)" />
<input type="hidden" name="popup" value="true" />

<table width="100" style="background-color: #ffffff">
<tr><td align="right">login</td>
<td><input style="width: 80px" name="username" type="text" value="$(username)"/></td>
</tr>
<tr><td align="right">password</td>
<td><input style="width: 80px" name="password" type="password"/></td>
</tr>
<tr><td>&nbsp;</td>
<td><input type="submit" value="OK" /></td>
</tr>
</table>
</form>
</td>
</tr>
<tr><td align="center"><a href="http://www.mikrotik.com" target="_blank" style="border: none;"><img src="img/logobottom.png" alt="mikrotik" /></a></td></tr>
</table>

<br /><div style="color: #c1c1c1; font-size: 9px">Powered by MikroTik RouterOS</div>
$(if error)<br /><div style="color: #FF8080; font-size: 9px">$(error)</div>$(endif)
</td>
</tr>
</table>

<script type="text/javascript">
<!--
document.login.username.focus();
//-->
</script>
</body>
</html>

♦♦♦

3- otherlogin.html [standard login page for ALL]

otherlogin.html /  This is the standard login.html page, i.e. the default Mikrotik login page. You can take your old default login.html and rename it to otherlogin.html.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<head>
<title>internet hotspot > login</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="expires" content="-1" />
<style type="text/css">
body {color: #737373; font-size: 10px; font-family: verdana;}

textarea,input,select {
background-color: #FDFBFB;
border: 1px solid #BBBBBB;
padding: 2px;
margin: 1px;
font-size: 14px;
color: #808080;
}

a, a:link, a:visited, a:active { color: #AAAAAA; text-decoration: none; font-size: 10px; }
a:hover { border-bottom: 1px dotted #c1c1c1; color: #AAAAAA; }
img {border: none;}
td { font-size: 14px; color: #7A7A7A; }
</style>

</head>

<html>
<body>

<div align="center">
<a href="$(link-login-only)?target=lv&amp;dst=$(link-orig-esc)">Latviski</a>
</div>
<div align="center">
<font size="4"><b>DESKTOP </b></font><b><font size="4">&nbsp;user</font></b></div>

<table width="100%" style="margin-top: 10%;">
<tr>
<td align="center" valign="middle">
<div style="color: #c1c1c1; font-size: 9px">Please log on to use the internet hotspot service<br />$(if trial == 'yes')Free trial available, <a style="color: #FF8080" href="$(link-login-only)?dst=$(link-orig-esc)&amp;username=T-$(mac-esc)">click here</a>.$(endif)</div><br />
<table width="280" height="280" style="border: 1px solid #cccccc; padding: 0px;" cellpadding="0" cellspacing="0">
<tr>
<td align="center" valign="bottom" height="175" colspan="2">
<form name="login" action="$(link-login-only)" method="post"
$(if chap-id) onSubmit="return doLogin()" $(endif)>
<input type="hidden" name="dst" value="$(link-orig)" />
<input type="hidden" name="popup" value="true" />

<table width="100" style="background-color: #ffffff">
<tr><td align="right">login</td>
<td><input style="width: 80px" name="username" type="text" value="$(username)"/></td>
</tr>
<tr><td align="right">password</td>
<td><input style="width: 80px" name="password" type="password"/></td>
</tr>
<tr><td>&nbsp;</td>
<td><input type="submit" value="OK" /></td>
</tr>
</table>
</form>
</td>
</tr>
<tr><td align="center"><a href="http://www.mikrotik.com" target="_blank" style="border: none;"><img src="img/logobottom.png" alt="mikrotik" /></a></td></tr>
</table>

<br /><div style="color: #c1c1c1; font-size: 9px">Powered by MikroTik RouterOS/zaib</div>
$(if error)<br /><div style="color: #FF8080; font-size: 9px">$(error)</div>$(endif)
</td>
</tr>
</table>

<script type="text/javascript">
<!--
document.login.username.focus();
//-->
</script>
</body>
</html>

After the three files have been created, upload them to the Mikrotik Files > hotspot folder.
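Before uploading, it is worth making sure the MikroTik placeholders survived your edits; if $(link-login-only) got mangled, the form will post nowhere. A minimal sketch (the inline printf stands in for your real file; run grep against mobilelogin.html and otherlogin.html instead):

```shell
# Sanity check: the hotspot server expands $(link-login-only) at serve time,
# so each login page must still contain it verbatim.
# Demo on an inline snippet; point grep at your real .html files instead.
printf '<form name="login" action="$(link-login-only)" method="post">\n' \
  | grep -c 'link-login-only'   # prints 1
```

Single quotes matter here: they stop the shell from treating $(link-login-only) as a command substitution.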

TEST DRIVE

First, from your mobile device, try to connect to the web and you should see your mobilelogin.html page,
Something like below …

Mikrotik HOTSPOT Mobile User Login Page

 

Now try to log in from a desktop PC, and you should see the otherlogin.html page.

Mikrotik HOTSPOT Desktop User Login Page

Regards,
Syed Jahanzaib

 


Filed under: Uncategorized

WSUS Clients Getting Error Code 800b0001


In our company, we have a Windows 2003-based WSUS 3.0 SP2 (Windows Server Update Services) server which is responsible for updating all local client and server Windows installations, including the 2000 / 2003 / 2008 / XP / Windows 7 versions.

Recently we added four new IBM-based servers running Windows 2008 R2, but they were unable to update, showing the following error …

wsus-error-b0001


After a few hours of R&D, I found out that this is usually due to the Windows Update Agent on the client having been updated, while the WSUS server itself also needs to be upgraded to allow communication with the newer agent. After installing SP2, you *MUST* also apply a later update, “Update for Windows Server Update Services 3.0 SP2 (KB2720211)”:

http://www.microsoft.com/en-us/download/details.aspx?id=29998

There was no need to reboot the server the last time I ran this process, and the clients were then able to communicate and obtain updates correctly.


Regards,
Syed Jahanzaib


Filed under: Microsoft Related

Howto Cache Youtube with SQUID / LUSCA and bypass Cached Videos from Mikrotik Queue [April, 2014 , zaib]


LAST UPDATED: 22nd April, 2014 / 0800 hours pkst

YouTube caching is working as of 22nd April, 2014; tested and confirmed.

[1st version > 11th January, 2011]

 What is LUSCA / SQUID ?

LUSCA is an advanced fork of Squid 2. The Lusca project aims to fix the shortcomings in Squid-2, and it also supports a variety of clustering protocols. Using it, you can cache some dynamic content that you previously couldn't with stock Squid.

For example [jz]
#  Video caching, i.e. YouTube / other tube sites . . .
#  Windows / Linux updates, anti-virus / anti-malware definitions, i.e. Avira / Avast / MBAM etc . . .
#  Well-known sites, i.e. Facebook / Google / Yahoo etc.
#  Download caching: MP3 / MPEG / AVI files etc . . .

Advantages of Youtube Caching   !!!

In most parts of the world bandwidth is very expensive, so it is (in some scenarios) very useful to cache YouTube or other flash videos. If one user downloads a video / flash file, why should the same user, or another one, pull the very same content through the internet pipe again and again instead of getting it from the cache?
People on the same LAN often watch similar videos. If I put a YouTube video link on Facebook, Twitter or the like, all my friends will watch that video, and that particular video gets viewed many times within a few hours. Since videos are commonly shared over social networking sites, the chances of multiple hits per popular video among my LAN users / friends are high.

This is the reason I wrote this article. I have implemented LUSCA / Squid on Ubuntu and it is working great, but to achieve good results you need some terabytes of storage in your proxy machine.

Disadvantages of Youtube Caching   !!!

The chance that another user will watch the same video is really slim. If I search for something specific on YouTube, I get hundreds of results for the same video; what is the chance that another user will search for the same thing and click on the same link / result? YouTube hosts more than 10 million videos, which is too much to cache anyway, and you need a lot of space to cache videos. Accordingly, you will also need modern, fast hardware with tons of RAM to handle such a giant cache. Anyhow, try it.

We will divide this article into the following sections:

1#  Installing SQUID / LUSCA in UBUNTU
2#  Setting up SQUID / LUSCA Configuration files
3#  Performing some Tests, testing your Cache HIT
4#  Using ZPH TOS to deliver cached content to clients via Mikrotik at full LAN speed, bypassing the user queue for cached content.


1#  Installing SQUID / LUSCA in UBUNTU

I assume your Ubuntu box has 2 interfaces configured, one for LAN and a second for WAN, and that internet sharing is already configured. Now moving on to the LUSCA / SQUID installation.

Here we go ….

Issue the following command. Remember that it is a single one-go command (it chains multiple commands, so it may take a while to update, install and compile all required items):


apt-get update &&
apt-get install gcc -y &&
apt-get install build-essential -y &&
apt-get install libstdc++6 -y &&
apt-get install unzip -y &&
apt-get install bzip2 -y &&
apt-get install sharutils -y &&
apt-get install ccze -y &&
apt-get install libzip-dev -y &&
apt-get install automake1.9 -y &&
apt-get install acpid -y &&
apt-get install libfile-readbackwards-perl -y &&
apt-get install dnsmasq -y &&
cd /tmp &&
wget -c http://wifismartzone.com/files/linux_related/lusca/LUSCA_HEAD-r14942.tar.gz &&
tar -xvzf LUSCA_HEAD-r14942.tar.gz &&
cd /tmp/LUSCA_HEAD-r14942 &&
./configure \
--prefix=/usr \
--exec_prefix=/usr \
--bindir=/usr/sbin \
--sbindir=/usr/sbin \
--libexecdir=/usr/lib/squid \
--sysconfdir=/etc/squid \
--localstatedir=/var/spool/squid \
--datadir=/usr/share/squid \
--enable-async-io=24 \
--with-aufs-threads=24 \
--with-pthreads \
--enable-storeio=aufs \
--enable-linux-netfilter \
--enable-arp-acl \
--enable-epoll \
--enable-removal-policies=heap \
--with-aio \
--with-dl \
--enable-snmp \
--enable-delay-pools \
--enable-htcp \
--enable-cache-digests \
--disable-unlinkd \
--enable-large-cache-files \
--with-large-files \
--enable-err-languages=English \
--enable-default-err-language=English \
--enable-referer-log \
--with-maxfd=65536 &&
make &&
make install

EDIT SQUID.CONF FILE

Now edit the squid.conf file using the following command:

nano /etc/squid/squid.conf

and delete all existing lines, then paste the configuration below.

Note that the following squid.conf is not very neat and clean; you will find some unnecessary junk entries in it. I didn't have time to clean them all up, so trim them as per your own targets and goals.

Now paste the following data (into squid.conf) …


#######################################################
## Squid_LUSCA configuration Starts from Here ...     #
## Thanks to Mr. Safatah [INDO] for sharing Configs   #
## Syed.Jahanzaib / 22nd April, 2014                  #
## http://aacable.wordpress.com / aacable@hotmail.com #
#######################################################

# HTTP Port for SQUID Service
http_port 8080 transparent
server_http11 on

# Cache peer, for a parent proxy if you have any; otherwise ignore it.
#cache_peer x.x.x.x parent 8080 0

# Various Logs/files location
pid_filename /var/run/squid.pid
coredump_dir /var/spool/squid/
error_directory /usr/share/squid/errors/English
icon_directory /usr/share/squid/icons
mime_table /etc/squid/mime.conf
access_log daemon:/var/log/squid/access.log squid
cache_log none
#debug_options ALL,1 22,3 11,2 #84,9
referer_log /var/log/squid/referer.log
cache_store_log none
store_dir_select_algorithm  round-robin
logfile_daemon /usr/lib/squid/logfile-daemon
logfile_rotate 1

# Cache Policy
cache_mem 6 MB
maximum_object_size_in_memory 0 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA

minimum_object_size 0 KB
maximum_object_size 10 GB
cache_swap_low 98
cache_swap_high 99

# Cache Folder Path, using 5GB for test
cache_dir aufs /cache-1 5000 16 256

# ACL Section
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8            # RFC1918 possible internal network
acl localnet src 172.16.0.0/12        # RFC1918 possible internal network
acl localnet src 192.168.0.0/16        # RFC1918 possible internal network
acl localnet src 125.165.92.1        # custom client IP (not RFC1918)
acl SSL_ports port 443
acl Safe_ports port 80                # http
acl Safe_ports port 21                # ftp
acl Safe_ports port 443                # https
acl Safe_ports port 70                # gopher
acl Safe_ports port 210                # wais
acl Safe_ports port 1025-65535        # unregistered ports
acl Safe_ports port 280                # http-mgmt
acl Safe_ports port 488                # gss-http
acl Safe_ports port 591                # filemaker
acl Safe_ports port 777                # multiling http
acl CONNECT method CONNECT
acl purge method PURGE
acl snmppublic snmp_community public

acl range dstdomain .windowsupdate.com
range_offset_limit -1 KB range

#===========================================================================
#    Loading Patch
acl DENYCACHE urlpath_regex \.(ini|ui|lst|inf|pak|ver|patch|md5|cfg|lst|list|rsc|log|conf|dbd|db)$
acl DENYCACHE urlpath_regex (notice.html|afs.dat|dat.asp|patchinfo.xml|version.list|iepngfix.htc|updates.txt|patchlist.txt)
acl DENYCACHE urlpath_regex (pointblank.css|login_form.css|form.css|noupdate.ui|ahn.ui|3n.mh)$
acl DENYCACHE urlpath_regex (Loader|gamenotice|sources|captcha|notice|reset)
no_cache deny DENYCACHE

range_offset_limit 1 MB !DENYCACHE
uri_whitespace strip

#===========================================================================
#    Rules to block few Advertising sites
acl ads url_regex -i .youtube\.com\/ad_frame?
acl ads url_regex -i .(s|s[0-90-9])\.youtube\.com
acl ads url_regex -i .googlesyndication\.com
acl ads url_regex -i .doubleclick\.net
acl ads url_regex -i ^http:\/\/googleads\.*
acl ads url_regex -i ^http:\/\/(ad|ads|ads[0-90-9]|ads\d|kad|a[b|d]|ad\d|adserver|adsbox)\.[a-z0-9]*\.[a-z][a-z]*
acl ads url_regex -i ^http:\/\/openx\.[a-z0-9]*\.[a-z][a-z]*
acl ads url_regex -i ^http:\/\/[a-z0-9]*\.openx\.net\/
acl ads url_regex -i ^http:\/\/[a-z0-9]*\.u-ad\.info\/
acl ads url_regex -i ^http:\/\/adserver\.bs\/
acl ads url_regex -i !^http:\/\/adf\.ly
http_access deny ads
http_reply_access deny ads
#deny_info http://yoursite/yourad,htm ads
#==== End Rules: Advertising ====

strip_query_terms off

acl yutub url_regex -i .*youtube\.com\/.*$
acl yutub url_regex -i .*youtu\.be\/.*$
logformat squid1 %{Referer}>h %ru
access_log /var/log/squid/yt.log squid1 yutub

# ==== Custom Option REWRITE ====
acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|videoplayback.*id)

acl store_rewrite_list urlpath_regex \.(mp2|mp3|mid|midi|mp[234]|wav|ram|ra|rm|au|3gp|m4r|m4a)\?
acl store_rewrite_list urlpath_regex \.(mpg|mpeg|mp4|m4v|mov|avi|asf|wmv|wma|dat|flv|swf)\?
acl store_rewrite_list urlpath_regex \.(jpeg|jpg|jpe|jp2|gif|tiff?|pcx|png|bmp|pic|ico)\?
acl store_rewrite_list urlpath_regex \.(chm|dll|doc|docx|xls|xlsx|ppt|pptx|pps|ppsx|mdb|mdbx)\?
acl store_rewrite_list urlpath_regex \.(txt|conf|cfm|psd|wmf|emf|vsd|pdf|rtf|odt)\?
acl store_rewrite_list urlpath_regex \.(class|jar|exe|gz|bz|bz2|tar|tgz|zip|gzip|arj|ace|bin|cab|msi|rar)\?
acl store_rewrite_list urlpath_regex \.(htm|html|mhtml|css|js)\?

acl store_rewrite_list_web url_regex ^http:\/\/([A-Za-z-]+[0-9]+)*\.[A-Za-z]*\.[A-Za-z]*
acl store_rewrite_list_web_CDN url_regex ^http:\/\/[a-z]+[0-9]\.google\.com doubleclick\.net

acl store_rewrite_list_path urlpath_regex \.(mp2|mp3|mid|midi|mp[234]|wav|ram|ra|rm|au|3gp|m4r|m4a)$
acl store_rewrite_list_path urlpath_regex \.(mpg|mpeg|mp4|m4v|mov|avi|asf|wmv|wma|dat|flv|swf)$
acl store_rewrite_list_path urlpath_regex \.(jpeg|jpg|jpe|jp2|gif|tiff?|pcx|png|bmp|pic|ico)$
acl store_rewrite_list_path urlpath_regex \.(chm|dll|doc|docx|xls|xlsx|ppt|pptx|pps|ppsx|mdb|mdbx)$
acl store_rewrite_list_path urlpath_regex \.(txt|conf|cfm|psd|wmf|emf|vsd|pdf|rtf|odt)$
acl store_rewrite_list_path urlpath_regex \.(class|jar|exe|gz|bz|bz2|tar|tgz|zip|gzip|arj|ace|bin|cab|msi|rar)$
acl store_rewrite_list_path urlpath_regex \.(htm|html|mhtml|css|js)$

acl getmethod method GET

storeurl_access deny !getmethod
#this is not related to youtube video its only for CDN pictures
storeurl_access allow store_rewrite_list_web_CDN
storeurl_access allow store_rewrite_list_web store_rewrite_list_path
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program /etc/squid/storeurl.pl
storeurl_rewrite_children 10
storeurl_rewrite_concurrency 40
# ==== End Custom Option REWRITE ====

#===========================================================================
#    Custom Option REFRESH PATTERN
#===========================================================================
refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv\?|\.fid\?) 43200 99% 43200 override-expire ignore-reload ignore-must-revalidate ignore-private
refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
# -- refresh pattern for specific sites -- #
refresh_pattern ^http://*.jobstreet.com.*/.* 720 100% 10080 override-expire override-lastmod ignore-no-cache
refresh_pattern ^http://*.indowebster.com.*/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-auth
refresh_pattern ^http://*.21cineplex.*/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-auth
refresh_pattern ^http://*.atmajaya.*/.* 720 100% 10080 override-expire ignore-no-cache ignore-auth
refresh_pattern ^http://*.kompas.*/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.theinquirer.*/.* 720 100% 10080 override-expire ignore-no-cache ignore-auth
refresh_pattern ^http://*.blogspot.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.wordpress.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache
refresh_pattern ^http://*.photobucket.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.tinypic.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.imageshack.us/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.kaskus.*/.* 720 100% 28800 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://www.kaskus.com/.* 720 100% 28800 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.detik.*/.* 720 50% 2880 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.detiknews.*/*.* 720 50% 2880 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://video.liputan6.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://static.liputan6.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.friendster.com/.* 720 100% 10080 override-expire override-lastmod ignore-no-cache ignore-auth
refresh_pattern ^http://*.facebook.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://apps.facebook.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.fbcdn.net/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://profile.ak.fbcdn.net/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://static.playspoon.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://cooking.game.playspoon.com/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern -i http://[^a-z\.]*onemanga\.com/? 720 80% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://media?.onemanga.com/.* 720 80% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.yahoo.com/.* 720 80% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.google.com/.* 720 80% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.forummikrotik.com/.* 720 80% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
refresh_pattern ^http://*.linux.or.id/.* 720 100% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-auth
# -- refresh pattern for extension -- #
refresh_pattern -i \.(mp2|mp3|mid|midi|mp[234]|wav|ram|ra|rm|au|3gp|m4r|m4a)(\?.*|$) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
refresh_pattern -i \.(mpg|mpeg|mp4|m4v|mov|avi|asf|wmv|wma|dat|flv|swf)(\?.*|$) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
refresh_pattern -i \.(jpeg|jpg|jpe|jp2|gif|tiff?|pcx|png|bmp|pic|ico)(\?.*|$) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
refresh_pattern -i \.(chm|dll|doc|docx|xls|xlsx|ppt|pptx|pps|ppsx|mdb|mdbx)(\?.*|$) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
refresh_pattern -i \.(txt|conf|cfm|psd|wmf|emf|vsd|pdf|rtf|odt)(\?.*|$) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
refresh_pattern -i \.(class|jar|exe|gz|bz|bz2|tar|tgz|zip|gzip|arj|ace|bin|cab|msi|rar)(\?.*|$) 5259487 999% 5259487 override-expire ignore-reload reload-into-ims ignore-no-cache ignore-private
refresh_pattern -i \.(htm|html|mhtml|css|js)(\?.*|$) 1440 90% 86400 override-expire ignore-reload reload-into-ims
#===========================================================================
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern ^ftp: 10080 95% 10080 override-lastmod reload-into-ims
refresh_pattern . 0 20% 10080 override-lastmod reload-into-ims

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet
# NOTE: 'allow all' opens the proxy to everyone; remove it on a public-facing box
http_access allow all
http_access deny all

icp_access allow localnet
icp_access deny all
icp_port 0

buffered_logs on

acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

forwarded_for off
header_access From deny all
header_access Server deny all
header_access Link deny all
header_access Via deny all
header_access X-Forwarded-For deny all
httpd_suppress_version_string on

shutdown_lifetime 10 seconds

snmp_port 3401
snmp_access allow snmppublic all
dns_timeout 1 minutes

dns_nameservers 8.8.8.8 8.8.4.4

fqdncache_size 5000    #16384
ipcache_size 5000    #16384
ipcache_low 98
ipcache_high 99
log_fqdn off
log_icp_queries off
memory_pools off

maximum_single_addr_tries 2
retry_on_error on

icp_hit_stale on

strip_query_terms off

query_icmp on
reload_into_ims on
emulate_httpd_log off
negative_ttl 0 seconds
pipeline_prefetch on
vary_ignore_expire on
half_closed_clients off
high_page_fault_warning 2
nonhierarchical_direct on
prefer_direct off
cache_mgr aacable@hotmail.com
cache_effective_user proxy
cache_effective_group proxy
visible_hostname proxy.zaib
unique_hostname syed_jahanzaib
cachemgr_passwd none all
client_db on
max_filedescriptors 8192

# ZPH config Marking Cache Hit, so cached contents can be delivered at full lan speed via MT
zph_mode tos
zph_local 0x30
zph_parent 0
zph_option 136
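The zph_local 0x30 value is a TOS byte; on the Mikrotik side you match its DSCP equivalent, which is the upper six bits of that byte. A quick arithmetic check of the conversion:

```shell
# DSCP is the TOS byte shifted right by 2 bits:
# zph_local 0x30  ->  DSCP 12, the value to match in a Mikrotik mangle rule.
echo $(( 0x30 >> 2 ))   # prints 12
```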


STOREURL.PL

Now we have to create an important file named storeurl.pl. It does the main job of rewriting URLs so that videos can be served from the cache.

touch /etc/squid/storeurl.pl
chmod +x /etc/squid/storeurl.pl
nano /etc/squid/storeurl.pl

Now paste the following lines, then Save and exit.

#!/usr/bin/perl
#######################################################
## Squid_LUSCA storeurl.pl starts from Here ...     #
## Thanks to Mr. Safatah [INDO] for sharing Configs  #
## Syed.Jahanzaib / 22nd April, 2014 #
## http://aacable.wordpress.com / aacable@hotmail.com #
#######################################################
$|=1;
while (<>) {
@X = split;
$x = $X[0] . " ";
##=================
## Encoding YOUTUBE
##=================
if ($X[1] =~ m/^http\:\/\/.*(youtube|google).*videoplayback.*/){
@itag = m/[&?](itag=[0-9]*)/;
@CPN = m/[&?]cpn\=([a-zA-Z0-9\-\_]*)/;
@IDS = m/[&?]id\=([a-zA-Z0-9\-\_]*)/;
$id = &GetID($CPN[0], $IDS[0]);
@range = m/[&?](range=[^\&\s]*)/;
print $x . "http://fathayu/" . $id . "&@itag@range\n";
} elsif ($X[1] =~ m/(youtube|google).*videoplayback\?/ ){
@itag = m/[&?](itag=[0-9]*)/;
@id = m/[&?](id=[^\&]*)/;
@redirect = m/[&?](redirect_counter=[^\&]*)/;
print $x . "http://fathayu/";
# ==========================================================================
#    VIMEO
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/av\.vimeo\.com\/\d+\/\d+\/(.*)\?/) {
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/pdl\.vimeocdn\.com\/\d+\/\d+\/(.*)\?/) {
print $x . "http://fathayu/" . $1 . "\n";
# ==========================================================================
#    DAILYMOTION
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/proxy-[0-9]{1}\.dailymotion\.com\/(.*)\/(.*)\/video\/\d{3}\/\d{3}\/(.*.flv)/) {
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/vid[0-9]\.ak\.dmcdn\.net\/(.*)\/(.*)\/video\/\d{3}\/\d{3}\/(.*.flv)/) {
print $x . "http://fathayu/" . $1 . "\n";
# ==========================================================================
#   YIMG
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/(.*yimg.com)\/\/(.*)\/([^\/\?\&]*\/[^\/\?\&]*\.[^\/\?\&]{3,4})(\?.*)?$/) {
print $x . "http://fathayu/" . $3 . "\n";
# ==========================================================================
#   YIMG DOUBLE
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/(.*?)\.yimg\.com\/(.*?)\.yimg\.com\/(.*?)\?(.*)/) {
print $x . "http://fathayu/" . $3 . "\n";
# ==========================================================================
#   YIMG WITH &sig=
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/(.*?)\.yimg\.com\/(.*)/) {
@y = ($1,$2);
$y[0] =~ s/[a-z]+[0-9]+/cdn/;
$y[1] =~ s/&sig=.*//;
print $x . "http://fathayu/" . $y[0] . ".yimg.com/" . $y[1] . "\n";
# ==========================================================================
#    YTIMG
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/i[1-4]\.ytimg\.com(.*)/) {
print $x . "http://fathayu/" . $1  . "\n";
# ==========================================================================
#   PORN Movies
# ==========================================================================
} elsif (($X[1] =~ /maxporn/) && (m/^http:\/\/([^\/]*?)\/(.*?)\/([^\/]*?)(\?.*)?$/)) {
print $x . "http://" . $1 . "/SQUIDINTERNAL/" . $3 . "\n";
#   Domain/path/.*/path/filename
} elsif (($X[1] =~ /fucktube/) && (m/^http:\/\/(.*?)(\.[^\.\-]*?[^\/]*\/[^\/]*)\/(.*)\/([^\/]*)\/([^\/\?\&]*)\.([^\/\?\&]{3,4})(\?.*?)$/)) {
@y = ($1,$2,$4,$5,$6);
$y[0] =~ s/(([a-zA-A]+[0-9]+(-[a-zA-Z])?$)|([^\.]*cdn[^\.]*)|([^\.]*cache[^\.]*))/cdn/;
print $x . "http://" . $y[0] . $y[1] . "/" . $y[2] . "/" . $y[3] . "." . $y[4] . "\n";
#   Like porn hub variables url and center part of the path, filename etention 3 or 4 with or without ? at the end
} elsif (($X[1] =~ /tube8|pornhub|xvideos/) && (m/^http:\/\/(([A-Za-z]+[0-9-.]+)*?(\.[a-z]*)?)\.([a-z]*[0-9]?\.[^\/]{3}\/[a-z]*)(.*?)((\/[a-z]*)?(\/[^\/]*){4}\.[^\/\?]{3,4})(\?.*)?$/)) {
print $x . "http://cdn." . $4 . $6 . "\n";
} elsif (($u =~ /tube8|redtube|hardcore-teen|pornhub|tubegalore|xvideos|hostedtube|pornotube|redtubefiles/) && (m/^http:\/\/(([A-Za-z]+[0-9-.]+)*?(\.[a-z]*)?)\.([a-z]*[0-9]?\.[^\/]{3}\/[a-z]*)(.*?)((\/[a-z]*)?(\/[^\/]*){4}\.[^\/\?]{3,4})(\?.*)?$/)) {
print $x . "http://cdn." . $4 . $6 . "\n";
#   acl store_rewrite_list url_regex -i \.xvideos\.com\/.*(3gp|mpg|flv|mp4)
#   refresh_pattern -i \.xvideos\.com\/.*(3gp|mpg|flv|mp4) 1440 99% 14400 override-expire override-lastmod ignore-no-cache ignore-private reload-into-ims ignore-must-revalidate ignore-reload store-stale
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/.*\.xvideos\.com\/.*\/([\w\d\-\.\%]*\.(3gp|mpg|flv|mp4))\?.*/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\d]+\.[\d]+\.[\d]+\.[\d]+\/.*\/xh.*\/([\w\d\-\.\%]*\.flv)/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\d]+\.[\d]+\.[\d]+\.[\d]+.*\/([\w\d\-\.\%]*\.flv)\?start=0/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/.*\.youjizz\.com.*\/([\w\d\-\.\%]*\.(mp4|flv|3gp))\?.*/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\w\d\-\.\%]*\.keezmovies[\w\d\-\.\%]*\.com.*\/([\w\d\-\.\%]*\.(mp4|flv|3gp|mpg|wmv))\?.*/){
print $x . "http://fathayu/" . $1 . $2 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\w\d\-\.\%]*\.tube8[\w\d\-\.\%]*\.com.*\/([\w\d\-\.\%]*\.(mp4|flv|3gp|mpg|wmv))\?.*/) {
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\w\d\-\.\%]*\.youporn[\w\d\-\.\%]*\.com.*\/([\w\d\-\.\%]*\.(mp4|flv|3gp|mpg|wmv))\?.*/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\w\d\-\.\%]*\.spankwire[\w\d\-\.\%]*\.com.*\/([\w\d\-\.\%]*\.(mp4|flv|3gp|mpg|wmv))\?.*/) {
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\w\d\-\.\%]*\.pornhub[\w\d\-\.\%]*\.com.*\/([[\w\d\-\.\%]*\.(mp4|flv|3gp|mpg|wmv))\?.*/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif ($X[1] =~ m/^http:\/\/[\w\d\-\_\.\%\/]*.*\/([\w\d\-\_\.]+\.(flv|mp3|mp4|3gp|wmv))\?.*cdn\_hash.*/){
print $x . "http://fathayu/" . $1 . "\n";
} elsif (($X[1] =~ /maxporn/) && (m/^http:\/\/([^\/]*?)\/(.*?)\/([^\/]*?)(\?.*)?$/)) {
print $x . "http://fathayu/" . $1 . "/SQUIDINTERNAL/" . $3 . "\n";
} elsif (($X[1] =~ /fucktube/) && (m/^http:\/\/(.*?)(\.[^\.\-]*?[^\/]*\/[^\/]*)\/(.*)\/([^\/]*)\/([^\/\?\&]*)\.([^\/\?\&]{3,4})(\?.*?)$/)) {
@y = ($1,$2,$4,$5,$6);
$y[0] =~ s/(([a-zA-Z]+[0-9]+(-[a-zA-Z])?$)|([^\.]*cdn[^\.]*)|([^\.]*cache[^\.]*))/cdn/;
print $x . "http://fathayu/" . $y[0] . $y[1] . "/" . $y[2] . "/" . $y[3] . "." . $y[4] . "\n";
} elsif (($X[1] =~ /media[0-9]{1,5}\.youjizz/) && (m/^http:\/\/(.*?)(\.[^\.\-]*?\.[^\/]*)\/(.*)\/([^\/\?\&]*)\.([^\/\?\&]{3,4})(\?.*?)$/)) {
@y = ($1,$2,$4,$5);
$y[0] =~ s/(([a-zA-Z]+[0-9]+(-[a-zA-Z])?$)|([^\.]*cdn[^\.]*)|([^\.]*cache[^\.]*))/cdn/;
print $x . "http://fathayu/" . $y[0] . $y[1] . "/" . $y[2] . "." . $y[3] . "\n";
# ==========================================================================
#   Filehippo
# ==========================================================================
} elsif (($X[1] =~ /filehippo/) && (m/^http:\/\/(.*?)\.(.*?)\/(.*?)\/(.*)\.([a-z0-9]{3,4})(\?.*)?/)) {
@y = ($1,$2,$4,$5);
$y[0] =~ s/[a-z0-9]{2,5}/cdn./;
print $x . "http://fathayu/" . $y[0] . $y[1] . "/" . $y[2] . "." . $y[3] . "\n";
} elsif (($X[1] =~ /filehippo/) && (m/^http:\/\/(.*?)(\.[^\/]*?)\/(.*?)\/([^\?\&\=]*)\.([\w\d]{2,4})\??.*$/)) {
@y = ($1,$2,$4,$5);
$y[0] =~ s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-Z]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;
print $x . "http://fathayu/" . $y[0] . $y[1] . "/" . $y[2] . "." . $y[3] . "\n";
} elsif ($X[1] =~ m/^http:\/\/.*filehippo\.com\/.*\/([\d\w\%\.\_\-]+\.(exe|zip|cab|msi|mru|mri|bz2|gzip|tgz|rar|pdf))/){
$y=$1;
for ($y) {
s/%20//g;
}
print $x . "http://fathayu//" . $y . "\n";
} elsif (($X[1] =~ /filehippo/) && (m/^http:\/\/(.*?)\.(.*?)\/(.*?)\/(.*)\.([a-z0-9]{3,4})(\?.*)?/)) {
@y = ($1,$2,$4,$5);
$y[0] =~ s/[a-z0-9]{2,5}/cdn./;
print $x . "http://fathayu/" . $y[0] . $y[1] . "/" . $y[2] . "." . $y[3] . "\n";
# ==========================================================================
#   4shared preview
# ==========================================================================
} elsif ($X[1] =~ m/^http:\/\/[a-z]{2}\d{3}\.4shared\.com\/img\/\d+\/\w+\/dlink__2Fdownload_2F.*_3Ftsid_(\w+)-\d+-\w+_26lgfp_3D1000_26sbsr_\w+\/preview.mp3/) {
print $x . "http://fathayu/" . $1 . "\n";

} else {
print $x . $X[1] . "\n";
}
}

sub GetID
{
$id = "";
use File::ReadBackwards;
my $lim = 200 ;
my $ref_log = File::ReadBackwards->new('/var/log/squid/referer.log');
while (defined($line = $ref_log->readline))
{
if ($line =~ m/.*youtube.*\/watch\?.*v=([a-zA-Z0-9\-\_]*).*\s.*id=$IDS[0].*/){
$id = $1;
last;
}
if ($line =~ m/.*youtube.*\/.*cpn=$CPN[0].*[&](video_id|docid|v)=([a-zA-Z0-9\-\_]*).*/){
$id = $2;
last;
}
if ($line =~ m/.*youtube.*\/.*[&?](video_id|docid|v)=([a-zA-Z0-9\-\_]*).*cpn=$CPN[0].*/){
$id = $2;
last;
}
last if --$lim <= 0;
}
if ($id eq ""){
$id = $IDS[0];
}
$ref_log->close();
return $id;
}
### STOREURL.PL ENDS HERE ###
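To see the kind of host normalisation the `s/.../cdn/` substitutions in the script above perform, here is a minimal shell sketch using sed (the hostname is a made-up example, not a real CDN): numbered mirror hostnames are collapsed to a single `cdn` token so that different mirrors of the same object map to one cache key.

```shell
# Collapse a numbered CDN/cache hostname to the fixed token "cdn",
# mirroring the s/.../cdn/ substitutions used in storeurl.pl above.
# "cache3.somecdn.example.com" is a made-up example hostname.
host="cache3.somecdn.example.com"
echo "$host" | sed -E 's/^([a-zA-Z]+[0-9]+|[^.]*cdn[^.]*|[^.]*cache[^.]*)/cdn/'
# prints: cdn.somecdn.example.com
```

This is why two clients fetching the same file from different mirrors can still produce a single cache hit.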

INITIALIZE CACHE and LOG FOLDER …

Now create the CACHE folder (here I used /cache-1 on the local drive)

# Log Folder and assign permissions
mkdir /var/log/squid
chown proxy:proxy /var/log/squid/

# Cache Folder
mkdir /cache-1
chown proxy:proxy /cache-1
#Now initialize cache dir by
squid -z

START SQUID SERVICE

Now start SQUID service by following command

squid 

and watch for any errors or abnormal termination. If all is OK, just press Enter a few times and you will be back at the command prompt.
To verify that squid is running OK, issue the following command and look for the squid instances; there should be 10+ squid processes.

ps aux |grep squid

 Something like below …

123

TIP:

To start the SQUID server in debug mode (useful for checking any errors), use the following

squid -N -d1

TEST time ….

It’s time to hit the ROAD and do some tests….

 

YOUTUBE TEST

Open YouTube and watch any video. After it has downloaded completely, open the same video from another client. You will notice that it downloads very quickly (a YouTube video is saved in chunks of 1.7 MB each, so after completing the first chunk it will stop; if the user continues to watch the same video, it will serve the second chunk, and so on). You can watch the progress bar move fast without using internet data.
As Shown in the example Below . . .

lusca_test.

YT cache HIT.

 

.

FILEHIPPO TEST [ZAIB]

FILHIPPO

As Shown in the example Below . . .

FILHIPPO

MUSIC DOWNLOAD TEST

Now test any music download. For example Go to
http://www.apniisp.com/songs/indian-movie-songs/ladies-vs-ricky-bahl/690/1.html
As Shown in the example Below . . .

and download any song. After it is downloaded, go to a 2nd client PC, download the same song, and monitor the Squid access log. You will see a cache hit (TCP_HIT) for this song.

As Shown in the example Below . . .

EXE / PROGRAM  DOWNLOAD TEST

Now test any .exe file download.
Go to http://www.rarlabs.com and download any package. After the download completes, go to a 2nd client PC and download the same file again, while monitoring the Squid access log. You will see a cache hit (TCP_HIT) for this file.

As Shown in the example Below . . .

SQUID LOGS

Other methods are as follows (I will update the following squid 2.7 articles soon):

http://aacable.wordpress.com/2012/01/19/youtube-caching-with-squid-2-7-using-storeurl-pl/
http://aacable.wordpress.com/2012/08/13/youtube-caching-with-squid-nginx/

.

.

.

MIKROTIK with SQUID/ZPH: how to bypass Squid Cache HIT object with Queues Tree in RouterOS 5.x and 6.x

.

zph

.

Using Mikrotik, we can redirect HTTP traffic to a SQUID proxy server, and we can also control user bandwidth. But it is a good idea to deliver already-cached content to users at full LAN speed; that is what we set up the cache server for in the first place: to save bandwidth and provide a fast browsing experience. So how can we tell Mikrotik that cached content should be delivered to users at unrestricted speed, with no queue applied to cache hits? Here we go.

By using ZPH directives, we will mark cached content so that it can later be picked up by Mikrotik.

The basic requirement is that Squid must be running in transparent mode, which can be done via iptables and squid.conf directives.
I am using squid 2.7 on UBUNTU (in Ubuntu, apt-get install squid installs squid 2.7 by default, which is great for our purpose).
Add these lines in SQUID.CONF

#===============================================================================
#ZPH for SQUID 2.7 (Default in ubuntu 10.4) / Syed Jahanzaib aacable@hotmail.com
#===============================================================================
tcp_outgoing_tos 0x30 lanuser [lanuser is an ACL for the local network; change it to match yours]
zph_mode tos
zph_local 0x30
zph_parent 0
zph_option 136

Use following if you have squid 3.1.19


#======================================================
#ZPH for SQUID 3.1.19 (Default in ubuntu 12.4) / Syed Jahanzaib aacable@hotmail.com
#======================================================

# ZPH for Squid 3.1.19
qos_flows local-hit=0x30
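A note on how this value ties in with the Mikrotik rules: DSCP is the upper six bits of the TOS byte, so the TOS 0x30 set by squid corresponds to DSCP 0x30 >> 2 = 12, which is exactly the dscp=12 that the Mikrotik mangle rule matches. A quick shell check of the arithmetic:

```shell
# DSCP occupies the top 6 bits of the 8-bit TOS field,
# so TOS 0x30 -> DSCP 0x30 >> 2 = 12 (matched by dscp=12 on Mikrotik).
tos=0x30
echo $(( tos >> 2 ))   # prints 12
```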

That's it for SQUID. Now moving on to the Mikrotik box, add the following rules:

# Marking packets with DSCP (for MT 5.x) for cache hit content coming from SQUID Proxy

/ip firewall mangle add action=mark-packet chain=prerouting disabled=no dscp=12 new-packet-mark=proxy-hit passthrough=no comment="Mark Cache Hit Packets / aacable@hotmail.com"
/queue tree add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=0 max-limit=0 name=pmark packet-mark=proxy-hit parent=global-out priority=8 queue=default

# Marking packets with DSCP (for MT 6.x) for cache hit content coming from SQUID Proxy

/ip firewall mangle add action=mark-packet chain=prerouting comment="MARK_CACHE_HIT_FROM_PROXY_ZAIB" disabled=no dscp=12 new-packet-mark=zph-hit passthrough=no
/queue simple
add max-limit=100M/100M name="ZPH-Proxy Cache Hit Simple Queue / Syed Jahanzaib aacable@hotmail.com" packet-marks=zph-hit priority=1/1 target="" total-priority=1

MAKE SURE YOU MOVE THE SIMPLE QUEUE ABOVE ALL OTHER QUEUES :D
.

Now every packet which is marked as a SQUID cache hit will be delivered to the user at full LAN speed; the rest of the traffic will be restricted by the user's queue.

TROUBLESHOOTING:

The above config is fully tested with squid 2.7 on UBUNTU and with LUSCA on FEDORA 10.

Make sure your squid is marking the TOS on cache-hit packets. You can check it via TCPDUMP:

tcpdump -vni eth0 | grep 'tos 0x30'

(eth0 = the LAN-connected interface)

You should see something like this:

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
20:25:07.961722 IP (tos 0x30, ttl 64, id 45167, offset 0, flags [DF], proto TCP (6), length 409)
20:25:07.962059 IP (tos 0x30, ttl 64, id 45168, offset 0, flags [DF], proto TCP (6), length 1480)
192 packets captured
195 packets received by filter
0 packets dropped by kernel
_________________________________


Regards,
SYED JAHANZAIB
http://aacable.wordpress.com


Filed under: Linux Related, Mikrotik Related

Howto connect Squid Proxy with Mikrotik with Single Interface

This short reference guide was made on request by a creature called 'Humans' living on planet earth ;)

Scenario:

We want to connect a Squid proxy server with Mikrotik, and the Squid server has only one interface.
Mikrotik is running a PPPoE server and has 3 interfaces as follows

MIKROTIK INTERFACE EXAMPLE:

MIKROTIK have 3 interfaces as follows…

LAN = 192.168.0.1/24
WAN = 1.1.1.1/24 (gw + dns pointing to the wan link)
proxy-interface = 192.168.2.1/24
PPPoE Users IP Pool = 172.16.0.1-172.16.0.255

 

SQUID  INTERFACE EXAMPLE:

SQUID proxy has only one interface as follows…

LAN (eth0) = 192.168.2.2/24
Gateway = 192.168.2.1
DNS = 192.168.2.2

.

As showed in the image below …

0-interface

.

To redirect traffic from the Mikrotik to the Squid proxy server, we have to create a redirect rule.
As showed in the example below …

.

.

Mikrotik Configuration:

CLI Version:


/ip firewall nat

add action=dst-nat chain=dstnat comment="Redirect only PPPoE Users to Proxy Server 192.168.2.2" disabled=no dst-port=80 protocol=tcp src-address=172.16.0.1-172.16.0.255 to-addresses=192.168.2.2 to-ports=8080

add action=masquerade chain=srcnat comment="Default NAT rule for Internet Access" disabled=no

 Also showed in the image below …

1- redirect rule.

.

.

No IPTABLES configuration is required at squid end :D
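One assumption worth spelling out: squid must be listening on the port the dst-nat rule points at (8080 in the rule above), in transparent mode, otherwise the redirected requests will fail. A minimal squid 2.7 fragment for that (adjust the port if your NAT rule differs):

```
http_port 8080 transparent
```

(On squid 3.1 and later the keyword is `intercept` instead of `transparent`.)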

.

Now try to browse from your client end, and you will see it in squid access.log
As showed in the image below …

2- squid logs with mt ip

 

DONE :)

.

.

.

TIPs and Tricks !

Just for info purposes …

How to view client original ip in squid logs instead of creepy mikrotik ip

As you have noticed, with the above redirect method client traffic is successfully routed (actually NATted) to the Squid proxy server. But the squid proxy log shows only the Mikrotik IP, so we have no idea which client is using the proxy. To view the client's original IP address instead of the Mikrotik one, you have to explicitly define the WAN interface in the default NAT rule, so that traffic sent to the proxy interface is not NATted :)
Mikrotik Default NAT rule configuration
As showed in the image below …

3- client original ip

.

Now you can see its effect at squid logs
As showed in the image below …

4-CLIENT ORIGNIAL IP

.

.

Regards,

☺☻♥
SYED JAHANZAIB
SKYPE – aacable79


Filed under: Linux Related

SQUID Proxy Server Monitoring with MRTG

This short reference guide was made on request by a creature called 'Humans' living on planet earth  ;)

This is a short reference guide to monitoring various SQUID counters with MRTG installed on Ubuntu. In this example I have configured SQUID and MRTG on the same Ubuntu box.
I have also shown how to install Apache, MRTG, a scheduler job that runs every 5 minutes, and the SNMP services and utilities, along with the MIBs.

If you have a freshly installed UBUNTU, you need to install the web server (apache2)

apt-get install apache2

Now we will install MRTG

apt-get install mrtg

(Choose Yes to continue)

Now we will install the SNMP server and other SNMP utilities so that we can collect information from localhost and remote PCs via SNMP.

apt-get install snmp snmpd

Now set your community string in /etc/snmp/snmpd.conf. Remove all lines and add only the following lines.

rocommunity public
syslocation "Karachi NOC, Pakistan"
syscontact  aacable@hotmail.com

Save and exit.

Now edit /etc/default/snmpd

and change the following


# snmpd options (use syslog, close stdin/out/err).
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

To THIS:
# snmpd options (use syslog, close stdin/out/err).
#SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid '
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'

and restart snmpd

/etc/init.d/snmpd restart
OR
service snmpd restart

Now install the MIB downloader,


sudo apt-get install snmp-mibs-downloader

Now copy all the MIBs into a single folder such as /cfg/mibs/


mkdir /cfg
mkdir /cfg/mibs
cp /var/lib/mibs/ietf/*  /cfg/mibs
cd /cfg/mibs
wget http://wifismartzone.com/files/linux_related/squid.mib

MIBs are required if you want to use OID names instead of numeric values :D This was the issue I was stuck on for many hours :(

The format that will be used in the cfg file, e.g.:

LoadMIBs: /cfg/mibs/squid.mib

Testing SNMP Service for localhost.

Now that the snmp service has been installed, it is better to do an snmpwalk test from localhost or another remote host to verify that our new configuration responds correctly. Issue the following command from the localhost terminal.

snmpwalk -v 1 -c public 127.0.0.1


and you will see lots of OIDs and information, which confirms that the snmp service is installed and responding OK.
As showed in the image below …

proxy -2.
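Once the walk responds, individual values can be pulled out for ad-hoc scripting. A minimal sketch that parses a canned snmpwalk-style line with awk (the input line below is a hard-coded sample, not live output):

```shell
# Parse the value part out of an snmpwalk-style "OID = TYPE: value" line.
# The input line is a hard-coded sample for illustration.
line='SNMPv2-MIB::sysName.0 = STRING: ubuntu'
echo "$line" | awk -F' = ' '{print $2}'
# prints: STRING: ubuntu
```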

.

.

.

Adding MRTG to crontab to run every 5 minutes

To add the scheduler job, first edit the crontab file

crontab -e
(if it asks for your preferred text editor, go with nano, it's much easier)

now add following line


*/5 * * * * env LANG=C mrtg /etc/mrtg.cfg --logging /var/log/mrtg.log

Some tips for INDEX MAKER and running MRTG manually …

Following is the command to create a CFG file for a remote PC.

cfgmaker public@192.168.2.1 > /cfg/proxy.cfg

Following is the command to check a remote PC's SNMP info

snmpwalk -v 1 -c public 192.168.2.1

Following is the command to create index page for your cfg file.

indexmaker /etc/mrtg.cfg --output /var/www/mrtg/index.html --columns=1 --compact

Following is the command to start MRTG and create your graph files manually. You have to run this command every 5 minutes in order to keep the graphs updated.

env LANG=C mrtg /etc/mrtg.cfg

.

.

.

Now LETS start with SQUID config…

 

SQUID CONFIGURATION FOR SNMP

Edit your squid.conf and add the following


acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic all

SAVE and EXIT.

Now use following proxy.cfg for the squid graphs

.

.

proxy.cfg


LoadMIBs: /cfg/mibs/squid.mib

Options[_]: growright,nobanner,logscale,pngdate,bits
Options[^]: growright,nobanner,logscale,pngdate,bits
WorkDir: /var/www/mrtg
EnableIPv6: no

### Interface 2 >> Descr: 'eth0' | Name: 'eth0' | Ip: '10.0.0.1' | Eth: '00-0c-29-2b-95-78' ###

Target[localhost_eth0]: #eth0:public@localhost:
SetEnv[localhost_eth0]: MRTG_INT_IP="10.0.0.1" MRTG_INT_DESCR="eth0"
MaxBytes[localhost_eth0]: 1250000
Title[localhost_eth0]: Traffic Analysis for eth0 -- ubuntu
PageTop[localhost_eth0]: <h1>Traffic Analysis for eth0 -- ubuntu</h1>
<div id="sysdetails">
<table>
<tr>
<td>System:</td>
<td>ubuntu in "Karachi NOC, Pakistan"</td>
</tr>
<tr>
<td>Maintainer:</td>
<td>aacable@hotmail.com</td>
</tr>
<tr>
<td>Description:</td>
<td>eth0  </td>
</tr>
<tr>
<td>ifType:</td>
<td>ethernetCsmacd (6)</td>
</tr>
<tr>
<td>ifName:</td>
<td>eth0</td>
</tr>
<tr>
<td>Max Speed:</td>
<td>1250.0 kBytes/s</td>
</tr>
<tr>
<td>Ip:</td>
<td>10.0.0.1 (ubuntu.local)</td>
</tr>
</table>
</div>

LoadMIBs: /cfg/mibs/squid.mib

PageFoot[^]: <i>Page managed by <a href="mailto:aacable@hotmail.com">Syed Jahanzaib</a></i>

Target[cacheServerRequests]: cacheServerRequests&cacheServerRequests:public@localhost:3401
MaxBytes[cacheServerRequests]: 10000000
Title[cacheServerRequests]: Server Requests @ zaib_squid_proxy_server
Options[cacheServerRequests]: growright, nopercent
PageTop[cacheServerRequests]: <h1>Server Requests @ zaib_squid_proxy_server</h1>
YLegend[cacheServerRequests]: requests/sec
ShortLegend[cacheServerRequests]: req/s
LegendI[cacheServerRequests]: Requests&nbsp;
LegendO[cacheServerRequests]:
Legend1[cacheServerRequests]: Requests
Legend2[cacheServerRequests]:

Target[cacheServerErrors]: cacheServerErrors&cacheServerErrors:public@localhost:3401
MaxBytes[cacheServerErrors]: 10000000
Title[cacheServerErrors]: Server Errors @ zaib_squid_proxy_server
Options[cacheServerErrors]: growright, nopercent
PageTop[cacheServerErrors]: <h1>Server Errors @ zaib_squid_proxy_server</h1>
YLegend[cacheServerErrors]: errors/sec
ShortLegend[cacheServerErrors]: err/s
LegendI[cacheServerErrors]: Errors&nbsp;
LegendO[cacheServerErrors]:
Legend1[cacheServerErrors]: Errors
Legend2[cacheServerErrors]:

Target[cacheServerInOutKb]: cacheServerInKb&cacheServerOutKb:public@localhost:3401 * 1024
MaxBytes[cacheServerInOutKb]: 1000000000
Title[cacheServerInOutKb]: Server In/Out Traffic @ zaib_squid_proxy_server
Options[cacheServerInOutKb]: growright, nopercent
PageTop[cacheServerInOutKb]: <h1>Server In/Out Traffic @ zaib_squid_proxy_server</h1>
YLegend[cacheServerInOutKb]: Bytes/sec
ShortLegend[cacheServerInOutKb]: Bytes/s
LegendI[cacheServerInOutKb]: Server In&nbsp;
LegendO[cacheServerInOutKb]: Server Out&nbsp;
Legend1[cacheServerInOutKb]: Server In
Legend2[cacheServerInOutKb]: Server Out

Target[cacheHttpHits]: cacheHttpHits&cacheHttpHits:public@localhost:3401
MaxBytes[cacheHttpHits]: 10000000
Title[cacheHttpHits]: HTTP Hits @ zaib_squid_proxy_server
Options[cacheHttpHits]: growright, nopercent
PageTop[cacheHttpHits]: <h1>HTTP Hits @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpHits]: hits/sec
ShortLegend[cacheHttpHits]: hits/s
LegendI[cacheHttpHits]: Hits&nbsp;
LegendO[cacheHttpHits]:
Legend1[cacheHttpHits]: Hits
Legend2[cacheHttpHits]:

Target[cacheHttpErrors]: cacheHttpErrors&cacheHttpErrors:public@localhost:3401
MaxBytes[cacheHttpErrors]: 10000000
Title[cacheHttpErrors]: HTTP Errors @ zaib_squid_proxy_server
Options[cacheHttpErrors]: growright, nopercent
PageTop[cacheHttpErrors]: <h1>HTTP Errors @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpErrors]: errors/sec
ShortLegend[cacheHttpErrors]: err/s
LegendI[cacheHttpErrors]: Errors&nbsp;
LegendO[cacheHttpErrors]:
Legend1[cacheHttpErrors]: Errors
Legend2[cacheHttpErrors]:

Target[cacheIcpPktsSentRecv]: cacheIcpPktsSent&cacheIcpPktsRecv:public@localhost:3401
MaxBytes[cacheIcpPktsSentRecv]: 10000000
Title[cacheIcpPktsSentRecv]: ICP Packets Sent/Received
Options[cacheIcpPktsSentRecv]: growright, nopercent
PageTop[cacheIcpPktsSentRecv]: <h1>ICP Packets Sent/Received @ zaib_squid_proxy_server</h1>
YLegend[cacheIcpPktsSentRecv]: packets/sec
ShortLegend[cacheIcpPktsSentRecv]: pkts/s
LegendI[cacheIcpPktsSentRecv]: Pkts Sent&nbsp;
LegendO[cacheIcpPktsSentRecv]: Pkts Received&nbsp;
Legend1[cacheIcpPktsSentRecv]: Pkts Sent
Legend2[cacheIcpPktsSentRecv]: Pkts Received

Target[cacheIcpKbSentRecv]: cacheIcpKbSent&cacheIcpKbRecv:public@localhost:3401 * 1024
MaxBytes[cacheIcpKbSentRecv]: 1000000000
Title[cacheIcpKbSentRecv]: ICP Bytes Sent/Received
Options[cacheIcpKbSentRecv]: growright, nopercent
PageTop[cacheIcpKbSentRecv]: <h1>ICP Bytes Sent/Received @ zaib_squid_proxy_server</h1>
YLegend[cacheIcpKbSentRecv]: Bytes/sec
ShortLegend[cacheIcpKbSentRecv]: Bytes/s
LegendI[cacheIcpKbSentRecv]: Sent&nbsp;
LegendO[cacheIcpKbSentRecv]: Received&nbsp;
Legend1[cacheIcpKbSentRecv]: Sent
Legend2[cacheIcpKbSentRecv]: Received

Target[cacheHttpInOutKb]: cacheHttpInKb&cacheHttpOutKb:public@localhost:3401 * 1024
MaxBytes[cacheHttpInOutKb]: 1000000000
Title[cacheHttpInOutKb]: HTTP In/Out Traffic @ zaib_squid_proxy_server
Options[cacheHttpInOutKb]: growright, nopercent
PageTop[cacheHttpInOutKb]: <h1>HTTP In/Out Traffic @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpInOutKb]: Bytes/second
ShortLegend[cacheHttpInOutKb]: Bytes/s
LegendI[cacheHttpInOutKb]: HTTP In&nbsp;
LegendO[cacheHttpInOutKb]: HTTP Out&nbsp;
Legend1[cacheHttpInOutKb]: HTTP In
Legend2[cacheHttpInOutKb]: HTTP Out

Target[cacheCurrentSwapSize]: cacheCurrentSwapSize&cacheCurrentSwapSize:public@localhost:3401
MaxBytes[cacheCurrentSwapSize]: 1000000000
Title[cacheCurrentSwapSize]: Current Swap Size @ zaib_squid_proxy_server
Options[cacheCurrentSwapSize]: gauge, growright, nopercent
PageTop[cacheCurrentSwapSize]: <h1>Current Swap Size @ zaib_squid_proxy_server</h1>
YLegend[cacheCurrentSwapSize]: swap size
ShortLegend[cacheCurrentSwapSize]: Bytes
LegendI[cacheCurrentSwapSize]: Swap Size&nbsp;
LegendO[cacheCurrentSwapSize]:
Legend1[cacheCurrentSwapSize]: Swap Size
Legend2[cacheCurrentSwapSize]:

Target[cacheNumObjCount]: cacheNumObjCount&cacheNumObjCount:public@localhost:3401
MaxBytes[cacheNumObjCount]: 10000000
Title[cacheNumObjCount]: Num Object Count @ zaib_squid_proxy_server
Options[cacheNumObjCount]: gauge, growright, nopercent
PageTop[cacheNumObjCount]: <h1>Num Object Count @ zaib_squid_proxy_server</h1>
YLegend[cacheNumObjCount]: # of objects
ShortLegend[cacheNumObjCount]: objects
LegendI[cacheNumObjCount]: Num Objects&nbsp;
LegendO[cacheNumObjCount]:
Legend1[cacheNumObjCount]: Num Objects
Legend2[cacheNumObjCount]:

Target[cacheCpuUsage]: cacheCpuUsage&cacheCpuUsage:public@localhost:3401
MaxBytes[cacheCpuUsage]: 100
AbsMax[cacheCpuUsage]: 100
Title[cacheCpuUsage]: CPU Usage @ zaib_squid_proxy_server
Options[cacheCpuUsage]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheCpuUsage]: dwmy
PageTop[cacheCpuUsage]: <h1>CPU Usage @ zaib_squid_proxy_server</h1>
YLegend[cacheCpuUsage]: usage %
ShortLegend[cacheCpuUsage]:%
LegendI[cacheCpuUsage]: CPU Usage&nbsp;
LegendO[cacheCpuUsage]:
Legend1[cacheCpuUsage]: CPU Usage
Legend2[cacheCpuUsage]:

Target[cacheMemUsage]: cacheMemUsage&cacheMemUsage:public@localhost:3401 * 1024
MaxBytes[cacheMemUsage]: 2000000000
Title[cacheMemUsage]: Memory Usage
Options[cacheMemUsage]: gauge, growright, nopercent
PageTop[cacheMemUsage]: <h1>Total memory accounted for @ zaib_squid_proxy_server</h1>
YLegend[cacheMemUsage]: Bytes
ShortLegend[cacheMemUsage]: Bytes
LegendI[cacheMemUsage]: Mem Usage&nbsp;
LegendO[cacheMemUsage]:
Legend1[cacheMemUsage]: Mem Usage
Legend2[cacheMemUsage]:

Target[cacheSysPageFaults]: cacheSysPageFaults&cacheSysPageFaults:public@localhost:3401
MaxBytes[cacheSysPageFaults]: 10000000
Title[cacheSysPageFaults]: Sys Page Faults @ zaib_squid_proxy_server
Options[cacheSysPageFaults]: growright, nopercent
PageTop[cacheSysPageFaults]: <h1>Sys Page Faults @ zaib_squid_proxy_server</h1>
YLegend[cacheSysPageFaults]: page faults/sec
ShortLegend[cacheSysPageFaults]: PF/s
LegendI[cacheSysPageFaults]: Page Faults&nbsp;
LegendO[cacheSysPageFaults]:
Legend1[cacheSysPageFaults]: Page Faults
Legend2[cacheSysPageFaults]:

Target[cacheSysVMsize]: cacheSysVMsize&cacheSysVMsize:public@localhost:3401 * 1024
MaxBytes[cacheSysVMsize]: 1000000000
Title[cacheSysVMsize]: Storage Mem Size @ zaib_squid_proxy_server
Options[cacheSysVMsize]: gauge, growright, nopercent
PageTop[cacheSysVMsize]: <h1>Storage Mem Size @ zaib_squid_proxy_server</h1>
YLegend[cacheSysVMsize]: mem size
ShortLegend[cacheSysVMsize]: Bytes
LegendI[cacheSysVMsize]: Mem Size&nbsp;
LegendO[cacheSysVMsize]:
Legend1[cacheSysVMsize]: Mem Size
Legend2[cacheSysVMsize]:

Target[cacheSysStorage]: cacheSysStorage&cacheSysStorage:public@localhost:3401
MaxBytes[cacheSysStorage]: 1000000000
Title[cacheSysStorage]: Storage Swap Size @ zaib_squid_proxy_server
Options[cacheSysStorage]: gauge, growright, nopercent
PageTop[cacheSysStorage]: <h1>Storage Swap Size @ zaib_squid_proxy_server</h1>
YLegend[cacheSysStorage]: swap size (KB)
ShortLegend[cacheSysStorage]: KBytes
LegendI[cacheSysStorage]: Swap Size&nbsp;
LegendO[cacheSysStorage]:
Legend1[cacheSysStorage]: Swap Size
Legend2[cacheSysStorage]:

Target[cacheSysNumReads]: cacheSysNumReads&cacheSysNumReads:public@localhost:3401
MaxBytes[cacheSysNumReads]: 10000000
Title[cacheSysNumReads]: HTTP I/O number of reads @ zaib_squid_proxy_server
Options[cacheSysNumReads]: growright, nopercent
PageTop[cacheSysNumReads]: <h1>HTTP I/O number of reads @ zaib_squid_proxy_server</h1>
YLegend[cacheSysNumReads]: reads/sec
ShortLegend[cacheSysNumReads]: reads/s
LegendI[cacheSysNumReads]: I/O&nbsp;
LegendO[cacheSysNumReads]:
Legend1[cacheSysNumReads]: I/O
Legend2[cacheSysNumReads]:

Target[cacheCpuTime]: cacheCpuTime&cacheCpuTime:public@localhost:3401
MaxBytes[cacheCpuTime]: 1000000000
Title[cacheCpuTime]: Cpu Time
Options[cacheCpuTime]: gauge, growright, nopercent
PageTop[cacheCpuTime]: <h1>Amount of cpu seconds consumed @ zaib_squid_proxy_server</h1>
YLegend[cacheCpuTime]: cpu seconds
ShortLegend[cacheCpuTime]: cpu seconds
LegendI[cacheCpuTime]: Mem Time&nbsp;
LegendO[cacheCpuTime]:
Legend1[cacheCpuTime]: Mem Time
Legend2[cacheCpuTime]:

Target[cacheMaxResSize]: cacheMaxResSize&cacheMaxResSize:public@localhost:3401 * 1024
MaxBytes[cacheMaxResSize]: 1000000000
Title[cacheMaxResSize]: Max Resident Size
Options[cacheMaxResSize]: gauge, growright, nopercent
PageTop[cacheMaxResSize]: <h1>Maximum Resident Size @ zaib_squid_proxy_server</h1>
YLegend[cacheMaxResSize]: Bytes
ShortLegend[cacheMaxResSize]: Bytes
LegendI[cacheMaxResSize]: Size&nbsp;
LegendO[cacheMaxResSize]:
Legend1[cacheMaxResSize]: Size
Legend2[cacheMaxResSize]:

Target[cacheCurrentUnlinkRequests]: cacheCurrentUnlinkRequests&cacheCurrentUnlinkRequests:public@localhost:3401
MaxBytes[cacheCurrentUnlinkRequests]: 1000000000
Title[cacheCurrentUnlinkRequests]: Unlinkd Requests
Options[cacheCurrentUnlinkRequests]: growright, nopercent
PageTop[cacheCurrentUnlinkRequests]: <h1>Requests given to unlinkd @ zaib_squid_proxy_server</h1>
YLegend[cacheCurrentUnlinkRequests]: requests/sec
ShortLegend[cacheCurrentUnlinkRequests]: reqs/s
LegendI[cacheCurrentUnlinkRequests]: Unlinkd requests&nbsp;
LegendO[cacheCurrentUnlinkRequests]:
Legend1[cacheCurrentUnlinkRequests]: Unlinkd requests
Legend2[cacheCurrentUnlinkRequests]:

Target[cacheCurrentUnusedFileDescrCount]: cacheCurrentUnusedFileDescrCount&cacheCurrentUnusedFileDescrCount:public@localhost:3401
MaxBytes[cacheCurrentUnusedFileDescrCount]: 1000000000
Title[cacheCurrentUnusedFileDescrCount]: Available File Descriptors
Options[cacheCurrentUnusedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentUnusedFileDescrCount]: <h1>Available number of file descriptors @ zaib_squid_proxy_server</h1>
YLegend[cacheCurrentUnusedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentUnusedFileDescrCount]: FDs
LegendI[cacheCurrentUnusedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentUnusedFileDescrCount]:
Legend1[cacheCurrentUnusedFileDescrCount]: File Descriptors
Legend2[cacheCurrentUnusedFileDescrCount]:

Target[cacheCurrentReservedFileDescrCount]: cacheCurrentReservedFileDescrCount&cacheCurrentReservedFileDescrCount:public@localhost:3401
MaxBytes[cacheCurrentReservedFileDescrCount]: 1000000000
Title[cacheCurrentReservedFileDescrCount]: Reserved File Descriptors
Options[cacheCurrentReservedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentReservedFileDescrCount]: <h1>Reserved number of file descriptors @ zaib_squid_proxy_server</h1>
YLegend[cacheCurrentReservedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentReservedFileDescrCount]: FDs
LegendI[cacheCurrentReservedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentReservedFileDescrCount]:
Legend1[cacheCurrentReservedFileDescrCount]: File Descriptors
Legend2[cacheCurrentReservedFileDescrCount]:

Target[cacheClients]: cacheClients&cacheClients:public@localhost:3401
MaxBytes[cacheClients]: 1000000000
Title[cacheClients]: Number of Clients
Options[cacheClients]: gauge, growright, nopercent
PageTop[cacheClients]: <h1>Number of clients accessing cache @ zaib_squid_proxy_server</h1>
YLegend[cacheClients]: clients/sec
ShortLegend[cacheClients]: clients/s
LegendI[cacheClients]: Num Clients&nbsp;
LegendO[cacheClients]:
Legend1[cacheClients]: Num Clients
Legend2[cacheClients]:

Target[cacheHttpAllSvcTime]: cacheHttpAllSvcTime.5&cacheHttpAllSvcTime.60:public@localhost:3401
MaxBytes[cacheHttpAllSvcTime]: 1000000000
Title[cacheHttpAllSvcTime]: HTTP All Service Time
Options[cacheHttpAllSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpAllSvcTime]: <h1>HTTP all service time @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpAllSvcTime]: svc time (ms)
ShortLegend[cacheHttpAllSvcTime]: ms
LegendI[cacheHttpAllSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpAllSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpAllSvcTime]: Median Svc Time
Legend2[cacheHttpAllSvcTime]: Median Svc Time

Target[cacheHttpMissSvcTime]: cacheHttpMissSvcTime.5&cacheHttpMissSvcTime.60:public@localhost:3401
MaxBytes[cacheHttpMissSvcTime]: 1000000000
Title[cacheHttpMissSvcTime]: HTTP Miss Service Time
Options[cacheHttpMissSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpMissSvcTime]: <h1>HTTP miss service time @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpMissSvcTime]: svc time (ms)
ShortLegend[cacheHttpMissSvcTime]: ms
LegendI[cacheHttpMissSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpMissSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpMissSvcTime]: Median Svc Time
Legend2[cacheHttpMissSvcTime]: Median Svc Time

Target[cacheHttpNmSvcTime]: cacheHttpNmSvcTime.5&cacheHttpNmSvcTime.60:public@localhost:3401
MaxBytes[cacheHttpNmSvcTime]: 1000000000
Title[cacheHttpNmSvcTime]: HTTP Near Miss Service Time
Options[cacheHttpNmSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpNmSvcTime]: <h1>HTTP near miss service time @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpNmSvcTime]: svc time (ms)
ShortLegend[cacheHttpNmSvcTime]: ms
LegendI[cacheHttpNmSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpNmSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpNmSvcTime]: Median Svc Time
Legend2[cacheHttpNmSvcTime]: Median Svc Time

Target[cacheHttpHitSvcTime]: cacheHttpHitSvcTime.5&cacheHttpHitSvcTime.60:public@localhost:3401
MaxBytes[cacheHttpHitSvcTime]: 1000000000
Title[cacheHttpHitSvcTime]: HTTP Hit Service Time
Options[cacheHttpHitSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpHitSvcTime]: <h1>HTTP hit service time @ zaib_squid_proxy_server</h1>
YLegend[cacheHttpHitSvcTime]: svc time (ms)
ShortLegend[cacheHttpHitSvcTime]: ms
LegendI[cacheHttpHitSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpHitSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpHitSvcTime]: Median Svc Time
Legend2[cacheHttpHitSvcTime]: Median Svc Time

Target[cacheIcpQuerySvcTime]: cacheIcpQuerySvcTime.5&cacheIcpQuerySvcTime.60:public@localhost:3401
MaxBytes[cacheIcpQuerySvcTime]: 1000000000
Title[cacheIcpQuerySvcTime]: ICP Query Service Time
Options[cacheIcpQuerySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpQuerySvcTime]: <h1>ICP query service time @ zaib_squid_proxy_server</h1>
YLegend[cacheIcpQuerySvcTime]: svc time (ms)
ShortLegend[cacheIcpQuerySvcTime]: ms
LegendI[cacheIcpQuerySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpQuerySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpQuerySvcTime]: Median Svc Time
Legend2[cacheIcpQuerySvcTime]: Median Svc Time

Target[cacheIcpReplySvcTime]: cacheIcpReplySvcTime.5&cacheIcpReplySvcTime.60:public@localhost:3401
MaxBytes[cacheIcpReplySvcTime]: 1000000000
Title[cacheIcpReplySvcTime]: ICP Reply Service Time
Options[cacheIcpReplySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpReplySvcTime]: <h1>ICP reply service time @ zaib_squid_proxy_server</h1>
YLegend[cacheIcpReplySvcTime]: svc time (ms)
ShortLegend[cacheIcpReplySvcTime]: ms
LegendI[cacheIcpReplySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpReplySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpReplySvcTime]: Median Svc Time
Legend2[cacheIcpReplySvcTime]: Median Svc Time

Target[cacheDnsSvcTime]: cacheDnsSvcTime.5&cacheDnsSvcTime.60:public@localhost:3401
MaxBytes[cacheDnsSvcTime]: 1000000000
Title[cacheDnsSvcTime]: DNS Service Time
Options[cacheDnsSvcTime]: gauge, growright, nopercent
PageTop[cacheDnsSvcTime]: <h1>DNS service time @ zaib_squid_proxy_server</h1>
YLegend[cacheDnsSvcTime]: svc time (ms)
ShortLegend[cacheDnsSvcTime]: ms
LegendI[cacheDnsSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheDnsSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheDnsSvcTime]: Median Svc Time
Legend2[cacheDnsSvcTime]: Median Svc Time

Target[cacheRequestHitRatio]: cacheRequestHitRatio.5&cacheRequestHitRatio.60:public@localhost:3401
MaxBytes[cacheRequestHitRatio]: 100
AbsMax[cacheRequestHitRatio]: 100
Title[cacheRequestHitRatio]: Request Hit Ratio @ zaib_squid_proxy_server
Options[cacheRequestHitRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestHitRatio]: dwmy
PageTop[cacheRequestHitRatio]: <h1>Request Hit Ratio @ zaib_squid_proxy_server</h1>
YLegend[cacheRequestHitRatio]: %
ShortLegend[cacheRequestHitRatio]: %
LegendI[cacheRequestHitRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestHitRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestHitRatio]: Median Hit Ratio
Legend2[cacheRequestHitRatio]: Median Hit Ratio

Target[cacheRequestByteRatio]: cacheRequestByteRatio.5&cacheRequestByteRatio.60:public@localhost:3401
MaxBytes[cacheRequestByteRatio]: 100
AbsMax[cacheRequestByteRatio]: 100
Title[cacheRequestByteRatio]: Byte Hit Ratio @ zaib_squid_proxy_server
Options[cacheRequestByteRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestByteRatio]: dwmy
PageTop[cacheRequestByteRatio]: <h1>Byte Hit Ratio @ zaib_squid_proxy_server</h1>
YLegend[cacheRequestByteRatio]: %
ShortLegend[cacheRequestByteRatio]:%
LegendI[cacheRequestByteRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestByteRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestByteRatio]: Median Hit Ratio
Legend2[cacheRequestByteRatio]: Median Hit Ratio

Target[cacheBlockingGetHostByAddr]: cacheBlockingGetHostByAddr&cacheBlockingGetHostByAddr:public@localhost:3401
MaxBytes[cacheBlockingGetHostByAddr]: 1000000000
Title[cacheBlockingGetHostByAddr]: Blocking gethostbyaddr
Options[cacheBlockingGetHostByAddr]: growright, nopercent
PageTop[cacheBlockingGetHostByAddr]: <h1>Blocking gethostbyaddr count @ zaib_squid_proxy_server</h1>
YLegend[cacheBlockingGetHostByAddr]: blocks/sec
ShortLegend[cacheBlockingGetHostByAddr]: blocks/s
LegendI[cacheBlockingGetHostByAddr]: Blocking&nbsp;
LegendO[cacheBlockingGetHostByAddr]:
Legend1[cacheBlockingGetHostByAddr]: Blocking
Legend2[cacheBlockingGetHostByAddr]:

.

Then issue the following command. To get graphs from the new config file, you have to run it 3 times (the first two runs print warnings about missing log files, which is normal).

env LANG=C mrtg /etc/mrtg.cfg

Then create an index file so all graphs can be accessed via a single page:

indexmaker /etc/mrtg.cfg --output /var/www/mrtg/index.html --columns=1 --compact

Now browse to your mrtg folder via browser

http://yourboxip/mrtg

and you will see your graphs in action. However, it will take some time to collect data, as MRTG updates its counters every 5 minutes.
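To keep the counters updated automatically, MRTG must run every 5 minutes. A minimal crontab entry for that (a sketch assuming the same /etc/mrtg.cfg path used above; add it via `crontab -e` as root):

```
# Run MRTG every 5 minutes, logging its messages instead of printing them
*/5 * * * * env LANG=C mrtg /etc/mrtg.cfg --logging /var/log/mrtg.log
```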

proxy - 1.

.

You can see more samples here…

http://chrismiles.info/unix/mrtg/squidsample/
http://chrismiles.info/unix/mrtg/mrtg-squid.cfg

 

.

Thank you
Syed Jahanzaib


Filed under: Linux Related

Howto get DSA Output in HTML format for IBM xSeries 3650 M4 [7915] Server


Recently one of our newly acquired IBM xSeries 3650 M4 [7915] servers started sending email alerts regarding Predictive Failure (PD/PFA), and on the front panel we got an amber light on an HDD.

2014-05-15 08.57.57

To receive support from IBM or the vendor, we have to send DSA logs. This DSA report contains every detail of all the hardware components of the machine. In the past we used DSA to generate HTML-based output on the older 3650 or 346 series servers, but this time we were unable to find any installable DSA package; only PORTABLE or PRE-BOOT versions were available. Since it was a live production server, we could not take downtime to boot from the DSA pre-boot CD, and the portable version produces a single XML file which is not human friendly or readable. So I used the following trick (provided by the vendor and GOOGLE) to make it produce HTML output.

(First make a new folder, in any location, where DSA will generate its HTML output, e.g. c:\dsa_output, then run:)

ibm_utl_dsa_dsytd3l-9.52_portable_windows_x86-64.exe -v -d c:\dsa_output

Output Sample:

dsa_output_html.

.

Regard’s
Syed Jahanzaib


Filed under: IBM Related

PTCL vDSL modem hang issue and its workaround


modem

Recently at a network, the operator was facing frequent hangs on a PTCL vDSL modem (HUAWEI HG622). The interval between hangs varied: sometimes 3-4 times daily, sometimes after 14-16 hours. Ping to the modem also timed out, and when the operator restarted the modem everything worked fine again, but it is painful to do this manually, especially in late night hours when no one is at the help desk to do the stupid job of resetting.

[It is also observed that PTA is actively blocking users' public IPs through which suspected grey traffic is passing (like VPN, HOTSPOT SHIELD and tunneling type applications, and especially VOIP), so disconnecting and reconnecting assigns you a new public IP and the internet starts working again.]

The workaround I made was to

  • Try Using Good Quality UPS with automatic voltage control,
  • First configure the modem in BRIDGE mode,
  • Add pppoe client dialer in the Mikrotik ppp section,
  • Then add a simple netwatch script which keeps checking internet connectivity at a 1 minute interval, and if it gets no reply from the internet (actually from a single host, like Google DNS 8.8.8.8) within 10 seconds, it disables the default dialer (pppoe-out1) and redials the connection after a 10 second PAUSE/DELAY (to prevent any dial-flood). However, it is more recommended to monitor your ISP gateway rather than Google DNS, due to various reasons.
  • It also sends an email to the admin so that he is aware of what is happening behind his back :P . You can skip the email section if you don't require notifications.

NETWATCH SCRIPT

Following is an EXPORT version of the netwatch script. You should modify it as per your local need.


/tool netwatch
add comment="Monitor Internet Connectivity 8.8.8.8" disabled=no down-script=":log error \"PTCL LINK SEEMS TO BE DOWN, Resetting PPPoE Dialer and wait for at least 10 seconds before redialing / zaib\"\r\
    \n /interface pppoe-client disable pppoe-out1\r\
    \n:delay 10\r\
    \n /interface pppoe-client enable pppoe-out1\r\
    \n" host=8.8.8.8 interval=1m timeout=10s up-script=":log warning \"PTCL LINK RE - CONNECTED, Please check and confirm / zaib\"\r\
    \n\r\
    \n/tool e-mail send to=\"your_email@yourdomain.com\" password=your_gmail_password subject=\"\$[/system clock get date] \$[/system clock get time] — PTCL DSL pppoe Dialer RE-CONNECTED AND UP NOW / zaib\" from=your_gmail_account@gmail.com server=173.194.69.109 tls=yes body=\"\$[/system clock get date] \$[/system clock get time] : PTCL Link was down, so the netwatch script disconnected the pppoe-out1 dialer and reconnec\
    ted after 10 seconds of delay. Thank you / aacable@hotmail.com\"\r\
    \n"

.

EMAIL CONFIGURATION 

You can skip this email config section if you dont want to receive notifications via email.


/tool e-mail
set address=173.194.69.109 from=your_email@gmail.com password=your_password port=587 starttls=yes user=your_username

Done.

When there is no response from the internet (Google DNS), netwatch will trigger the down-script section, which disconnects the active pppoe-out1 dialer connection, waits 10 seconds, then redials the connection, logs an alert and sends an email.

As showed in the image below ...

1- link up

.

2- link email

.

.

.

NOTES:

  • You can increase the interval and timeout values as per your requirement; ideally they should be a bit higher.
  • It is more recommended to monitor your ISP gateway rather than Google DNS, due to various reasons.
  • This script simply checks a single host. But what if only Google DNS is not responding while the rest of the internet is working fine? Netwatch will still think the whole internet connectivity is down, because it is checking a single host only, and will keep disconnecting/reconnecting the dialer. To prevent this, it is better to create a separate DOWN script which checks at least 2-4 HOSTS, including your ISP gateway and other reliable internet hosts like Google DNS.
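A minimal sketch of such a multi-host check, written as a scheduler script instead of netwatch (the host addresses and the dialer name pppoe-out1 are assumptions; replace them with your ISP gateway and preferred hosts):

```
# Redial only when ALL test hosts are unreachable / sketch, adjust to your network
:local hosts {"8.8.8.8";"4.2.2.2";"1.1.1.1"}
:local down true
:foreach h in=$hosts do={
    :if ([/ping $h count=3] > 0) do={ :set down false }
}
:if ($down) do={
    :log error "All test hosts down, resetting pppoe-out1 / zaib"
    /interface pppoe-client disable pppoe-out1
    :delay 10
    /interface pppoe-client enable pppoe-out1
}
```

Schedule it via /system scheduler at your desired interval; because it needs every host to fail before redialing, a single unreachable host no longer triggers a reconnect.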

.

Regard's
Syed Jahanzaib


Filed under: Mikrotik Related

Radius Manager Self Registration Captcha Image Not Showing


temp-image-error.

If you have dmasoftlab Radius Manager's SELF REGISTRATION option enabled, and the user is unable to see the captcha image while trying to self register his account, then check the following.

Make sure the /var/www/radiusmanager/tmpimages folder exists (this path is valid for Ubuntu; if you have Centos/Fedora then try /var/www/html/radiusmanager/tmpimages).

If tmpimages is not present, create it and assign it proper permissions for the WEB server user.
Example: [Ubuntu]

  • mkdir /var/www/radiusmanager/tmpimages
  • chown www-data:www-data /var/www/radiusmanager/tmpimages

.

Example: [Centos, Fedora]

  • mkdir /var/www/html/radiusmanager/tmpimages
  • chown apache:apache /var/www/html/radiusmanager/tmpimages

 

Now check again and you will see the images showing properly.

 

captcha

.

.

Regard’s
Syed Jahanzaib

 


Filed under: Radius Manager

Symantec Backup Exec Reference Notes


13132031-1874

Recently we upgraded our SAP infrastructure with a new IBM xSeries server and also replaced the old IBM tape library TS3200 with a TS3100. On the previous Windows 2003 setup we were using the classic NTBACKUP solution to take backups on the tape library, but after the upgrade to Windows 2008 R2 we found that tape drive support has been removed from the new Windows Server Backup tool. Therefore we were looking for a reliable backup solution which could drive our tape library. After searching a lot, we finally selected SYMANTEC BACKUP EXEC 2012 (with SP4 and latest patches) as our backup solution. Last year we tested its demo and it fulfilled our requirements and fit under our budget. Its installation went smooth without any errors, but it took me a few days to understand how it actually works. Its GUI interface looks pretty simple and easy to navigate, but I found it quite tricky to configure the tape library's auto loading function according to job/day.

Following are short reference notes. I will keep updating them with day-to-day tasks and issues I face and how I manage to solve them. Symantec has a great number of guides and postings at their site too, but sometimes it is hard to find the correct solution when it is kind of urgent.

:)

The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9)

If backup failed with following error:

———————————————————————-
V-79-57344-6523314.0.1798.1364eng-systemstate-backupV-79-57344-65233ENRetailWindows_V-6.1.7601_SP-1.0_PL-0x2_SU-0x112_PT-0×3 – Snapshot Technology: Initialization failure on: “\\AGPSAPDEV\System?State”. Snapshot technology used: Microsoft Volume Shadow Copy Service (VSS).
Snapshot technology error (0xE000FED1): A failure occurred querying the Writer status. See the job log for details about the error.

Check the Windows Event Viewer for details.

Writer Name: COM+ Class Registration Database, Writer ID: {542DA469-D3E1-473C-9F4F-7847F01FC64F}, Last error: The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9).

Writer Name: Windows Management Instrumentation, Writer ID: {A6AD56C2-B509-4E6C-BB19-49D8F43532F0}, Last error: The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9).

The following volumes are dependent on resource: “C:” “E:” .
The snapshot technology used by VSS for volume C: – Microsoft Software Shadow Copy provider 1.0 (Version 1.0.0.7).
The snapshot technology used by VSS for volume E: – Microsoft Software Shadow Copy provider 1.0 (Version 1.0.0.7).
14.0.1798.1364eng-systemstate-backupENRetailWindows_V-6.1.7601_SP-1.0_PL-0x2_SU-0x112_PT-0×3

        Job ended: Wednesday, June 04, 2014 at 2:49:03 AM
Completed status: Failed
Final error: 0xe000fed1 - A failure occurred querying the Writer status. See the job log for details about the error.

———————————————————————-

 

Issue this command and see if any writer is failing:

vssadmin list writers

vss-writer-error.

If the System Writer state is TIMED OUT, then simply a system restart would fix the error automatically. In my case, Windows had installed some updates, and a reboot fixed the above error.
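To narrow the (often long) writer listing down to just the writer names and their states, you can filter it through findstr (a convenience sketch; run it from an elevated command prompt):

```
vssadmin list writers | findstr /C:"Writer name" /C:"State"
```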

 


Filed under: Symentec Related

Non Payment Reminder for Expired Users in RADIUS MANAGER 4.x.x


123

As requested by many friends, following is a short guide on how to configure a payment reminder for expired users in DMASOFTLAB RADIUS MANAGER 4.x.x.
[I wrote this guide because it is better to explain in detail with snapshots here, rather than explaining to every individual.]

This guide demonstrates that when a user account expires, the user can still log in to your Mikrotik / NAS, but when he tries to browse, he will be redirected to a non-payment page showing why his access is blocked. Useful in many scenarios.

Scenario -1 :

[Simple one] Mikrotik as PPPoE server

LAN IP + DHCP POOL = 192.168.1.0/24
Local Web Server IP = 192.168.1.10
PPPoE IP Pool = 172.16.0.0/24
EXPIRED IP Pool = 172.16.100.0/24
WAN IP = 1.1.1.1

RADIUS MANAGER CONFIGURATION

  • Create a new service according to your requirements, e.g. 1mb / 1 month limitation
  • In IP pool name, type expired
  • In the Next expired service option, select EXPIRED as the next master service, so when the primary service expires, the user's service will be switched to this one. [Note: the EXPIRED service is already available in RM by default, but if you are unable to find it, you can create it manually: just add a new service named EXPIRED and set its IP pool accordingly.]

As showed in the image below …

 

1.

.

Now Create a user in users section and bind it with the new service you just created above that is 1mb / 1 month limitation

.

.

.

 

MIKROTIK CONFIGURATION

.

Add IP POOL for Expired Users

Add new IP Pool for EXPIRED pppoe users,


/ip pool

add name=expired ranges=172.16.100.1-172.16.100.255

 

As showed in the image below …

pool.

.

Enable WEB PROXY and add rules

Now enable WEB PROXY and add a deny/redirect rule so that we can redirect the EXPIRED users' pool to a web server showing the non-payment reminder page. You can also use an EXTERNAL proxy like squid to do the redirection, but in this guide I am showing only the Mikrotik level config.


# First Enable Mikrotik Web-Proxy (You can use external proxy server also like SQUID)
/ip proxy
set always-from-cache=no cache-administrator=webmaster cache-hit-dscp=4 cache-on-disk=no enabled=yes max-cache-size=unlimited max-client-connections=600 max-fresh-time=3d max-server-connections=600 \
parent-proxy=0.0.0.0 parent-proxy-port=0 port=8080 serialize-connections=no src-address=0.0.0.0

# Add rule to allow access to the web server, otherwise users won't be able to access the reminder page. This rule must be on top.
/ip proxy access
add action=allow comment="Allow access to web server so expired users can view the payment reminder page. It can be hosted locally or externally (on the internet) as well." disabled=no dst-address=192.168.1.10 \
dst-port=""

# Now add rule to redirect expired IP pool users to the local or external web server payment reminder page.
/ip proxy access
add action=deny disabled=no dst-port="" redirect-to=192.168.1.10/nonpayment/nonpayment.htm

As showed in the image below …

access

.

.

.

Add FIREWALL REDIRECT rule in NAT SECTION

Now add a REDIRECT rule in the FIREWALL/NAT section, and put only the valid pppoe users' pool in the default NAT rule.
This makes sure that expired users are redirected to the web proxy, which will deny their request and redirect them to the web server's reminder page;
and since only the valid pppoe pool is in the default NAT rule's src-address, only valid pppoe users can browse the internet.
As showed in the image below …
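The two NAT rules described above can be sketched as follows (the pool subnets are taken from the scenario at the top; the WAN interface name wan1 is an assumption, adjust to your setup):

```
/ip firewall nat
# Redirect HTTP from the EXPIRED pool to the local web proxy on port 8080
add chain=dstnat src-address=172.16.100.0/24 protocol=tcp dst-port=80 action=redirect to-ports=8080
# Default NAT rule: masquerade ONLY the valid pppoe pool
add chain=srcnat src-address=172.16.0.0/24 out-interface=wan1 action=masquerade
```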

 

3.

.

.

 

RESULT

Now when the client's primary profile expires, it will switch to the NEXT MASTER SERVICE, which we configured to EXPIRED; thus he will get an IP from the EXPIRED pool, and Mikrotik will redirect him to the proxy, which will deny his request and redirect it to the local payment reminder page.
As showed in the image below …

 

result

.

.
SQUID PROXY RULE TO BLOCK EXPIRED POOL RANGE

In squid.conf, add these before the other ACLs (or on top):


acl expired-clients src 172.16.100.0/24
http_access deny expired-clients
deny_info http://web_server_ip/nonpayment/nonpayment.htm expired-clients

Note: Ideally the web server should be on the same subnet.

.

.

 

Regard’s
Syed Jahanzaib


Filed under: Radius Manager

Radius Manager Dealer Panel


In Radius Manager, we have an option to add a MANAGER (dealer) so that the dealer can have access to his own management panel (similar to the ACP but with some limitations). The dealer can create new users, disable them, add deposit/credit to a user account, access invoices, and so on.

You can assign various permissions to the dealer as per requirements. Following is an example of creating NEW MANAGER with minimum rights.

Go to Managers and select NEW Manager.

As showed in the image below …

d3.

Assign necessary permissions, this is important :)

d2.

Now by default this dealer will have zero balance, so he won't be able to add credits to user accounts (he can create new accounts, but these accounts are EXPIRED by default, so in order to renew a user account the dealer MUST have deposit in his own account).

Now add some AMOUNT to his account. Open Managers and edit that dealer.
As showed in the image below …

d1

.

Now test it: log in with the dealer ID and add a new user. By default the new user will be expired, and the dealer must add credit to the user account. (He can also add a DEPOSIT, but then the user has to log in himself with his user ID and password to the user management panel and refresh his account with the deposited amount added by the dealer.)

As showed in the image below …

d4.

d5.

d6.

.

.

Binding Dealer to Use Only Specific Services

You can also bind specific services to a specific dealer. For example, you don't want Dealer A to use all services; instead you want to show him specific services only. Log in to the ACP as ADMIN, go to Services, and open the services that you do or don't want to be displayed in Dealer A's panel.

As showed in the image below …

d7

.

The result can be seen here…

d8

I will write more in some free time.

.

Regard’s
Syed Jahanzaib


Filed under: Uncategorized

IBM Storewize v3700 SAN Duplicate partitions showing in Windows 2008


v3700

Recently one of our IBM xSeries 3650 M4 servers faced a hardware failure related to local storage. Two partitions from an IBM Storwize v3700 were assigned to this system, connected with 2 QLogic FC cards through 2 BROCADE fiber switches for failover.

After reinstalling Windows 2008 R2, the SAN partitions appeared duplicated. The Windows MPIO feature was enabled, but the partitions were still appearing twice. After applying IBM's updated SDDDSM MPIO driver, the problem was solved.

Subsystem Device Driver Device Specific Module (SDDDSM) is IBM’s multipath IO solution based on Microsoft MPIO technology, it’s a device specific module specifically designed to support IBM storage devices. Together with MPIO, it’s designed to support the multipath configuration environments in the IBM Storage.

The download link is as follows. Just a small patch; apply and restart :)

http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000350#Storwize3700

 

DESCRIPTION: SDDDSM v2.4.3.4-4 (SDDDSM 2.4.3.4-4 for Windows Server 2008, English)
PLATFORM: Windows Server 2008 / 2008 R2 (32bit / 64bit)
BYTE SIZE: 577711
RELEASE DATE: 8/16/13
CERTIFIED: Yes

 

Regard’s
Syed Jahanzaib


Filed under: Uncategorized
Viewing all 409 articles
Browse latest View live