Forum Replies Created
This happens when an e-mail is sent by a Microsoft mail client (like Outlook) using Rich Text Format, and the recipient is unable to parse the non-standard format used by Microsoft to send such messages. The solution is to switch to HTML or Plain Text.
BTW, typing “winmail.dat attachment” into Google instantly returns hundreds of hits, all giving you this exact information as well as detailed instructions on how to change the e-mail format in Outlook.

Jan 19, 2016 at 5:27 pm in reply to: Does anybody actually use the "port-mirroring" feature on switches? #388817
It’s useful in any scenario where you need to inspect traffic flowing between other ports on the same switch, such as when you install an IDS.
So yes, it’s commonly used as a security feature in the sense that it can be part of a monitoring system.
Part of the problem is that you’ve converted the disk to the “Dynamic Disk” partitioning scheme. This Windows-specific partitioning system allows for dynamic extension of partitions into (possibly non-contiguous) free space on any available “dynamic” disk, but with one notable exception: Partitions created prior to the “conversion” can not be extended.
Windows 2012 actually supports dynamic extension of the system partition (and other partitions) into contiguous free space, even with the old MBR scheme. This is particularly useful with virtual disks, since one can simply grow the disk file and then extend the partition, usually without a reboot.
If you still have only the one “C” partition on the virtual disk, you might want to consider converting it back into an MBR disk. Contrary to Microsoft’s claims, it is possible to do this without reformatting the drive, provided there’s only a single partition on the drive AND that partition was created prior to the conversion. If you simply change the partition type from 42 (Windows SFS) to 7 (HPFS/NTFS) using a suitable third-party partitioning utility, like GNU/Linux fdisk, the drive will once again appear as an MBR disk in Disk Manager, and you’ll be able to grow the drive and extend the partition.
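To illustrate the mechanics (this is not a substitute for fdisk, and you should only ever touch a copy of the disk image, after snapshotting): the partition type byte sits at a fixed offset in the MBR's partition table. A minimal Python sketch, assuming a raw 512-byte copy of sector 0:

```python
# Sketch: where the partition type byte lives in an MBR, and how a
# partitioning tool flips it from 0x42 (dynamic disk / "Windows SFS")
# back to 0x07 (NTFS). Offsets are from the standard MBR layout.

MBR_PART_TABLE = 0x1BE  # first of four 16-byte partition entries
TYPE_OFFSET = 4         # the type byte within an entry

def get_partition_type(mbr: bytes, index: int = 0) -> int:
    """Return the type byte of partition entry `index` (0-3)."""
    return mbr[MBR_PART_TABLE + 16 * index + TYPE_OFFSET]

def set_partition_type(mbr: bytes, new_type: int, index: int = 0) -> bytes:
    """Return a copy of the 512-byte MBR sector with the type byte changed."""
    buf = bytearray(mbr)
    buf[MBR_PART_TABLE + 16 * index + TYPE_OFFSET] = new_type
    return bytes(buf)
```

The same single-byte change is what fdisk's "change partition type" command performs; everything else in the sector (boot code, other entries, the 0x55AA signature) is left untouched, which is why the data on the partition survives.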
Since this is a virtual disk, you can take a snapshot before you start manipulating the partition table, and simply roll back the changes should anything go wrong.

Jan 12, 2016 at 3:25 pm in reply to: Accounts with disconnected sessions are locking out. #388815
Account lockout happens when repeated authentication requests are being made with the wrong password. A disconnected RDP session should not generate any logon requests at all, unless of course there’s an application running in that session that keeps trying to log on, like, say, an e-mail client.
What does the security log on the domain controller(s) say? There should be a number of authentication failure events related to the affected account(s), and in addition to the username, the log entry should contain the name and the IP address of the client trying to authenticate.
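As a sketch of that kind of triage: export the Security log to CSV and group the failed-logon events (Event ID 4625) by account and source. The field names below are illustrative; match them to your actual export.

```python
from collections import Counter

# Rows as they might look after exporting the Security log to CSV.
# Event ID 4625 = failed logon, 4624 = successful logon.
events = [
    {"EventID": 4625, "Account": "jsmith", "Workstation": "TS01", "SourceIP": "10.0.0.21"},
    {"EventID": 4625, "Account": "jsmith", "Workstation": "TS01", "SourceIP": "10.0.0.21"},
    {"EventID": 4624, "Account": "admin",  "Workstation": "DC01", "SourceIP": "10.0.0.5"},
]

def failure_sources(rows, account):
    """Count (workstation, source IP) pairs behind failed logons for one account."""
    return Counter((r["Workstation"], r["SourceIP"])
                   for r in rows
                   if r["EventID"] == 4625 and r["Account"] == account)
```

A concentration of failures from one workstation IP usually points straight at the disconnected session (or the misbehaving application inside it).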
Without knowing the name of the application and the exact error message, it’s impossible to say what the problem could be.
You could just run Process Explorer ( https://technet.microsoft.com/en-us/…sexplorer.aspx ) in another session while you’re trying to start the application, and see what’s actually going on. Process Explorer will show you which files and registry keys are being accessed, and you can filter on process name and successes/failures and a bunch of other stuff.
Compare the log from an admin session with that of a non-admin session, and you should be able to figure out what the problem is.

Jan 11, 2016 at 1:02 pm in reply to: Enabling logs for tracking dns and ip address changes in machines #388812
The answer to that is “not really”, for several reasons.
By default, it seems the iphlpsvc in Windows 7 doesn’t log anything related to static IP configuration. I don’t know how or even if that can be changed.
But here’s the main issue: Even if you can enable logging of such configuration changes locally, someone with administrative privileges on the system could easily clear the logs and/or turn logging back off again. And since you speak of users changing various IP-related parameters, am I correct in assuming that these users do indeed have administrative rights on their PCs?

Jan 11, 2016 at 12:49 pm in reply to: Mapping drives automatically to different servers. #388811
If you place each server in a different Site (and create and configure appropriate subnet objects for each site), what you describe will happen automatically.
When several replicas exist for a DFS Folder, clients will by default be directed to the replica in their own site. Of course, if you have a Site without a server hosting a DFS replica, the client will be directed to another server chosen basically at random.
I guess you could control the drive mappings via GPO using item-level targeting, but then you would have to map directly to the shared folder on a server rather than the folder in the DFS namespace. Item-level targeting is available for every GPO setting under “Preferences” and it’s an extremely powerful and flexible mechanism. You can create fairly complex conditions based on Site, OU, IP address, group membership and lots of other criteria.
But in this particular case it does sound like DFS and properly configured Site objects would be the better solution.
When you’re trying to ping, are you attempting to do so from the router itself? If so, remember to do an extended ping and select Fa0/0 as the source interface, otherwise the source address won’t match the Phase 2 definition (access-list 102).
I have to ask: Is your Phase1 PSK really “cisco”? If it is, you really, REALLY should consider changing it immediately. Not only will it be in every key dictionary in the world, but I know of some non-Cisco IPsec implementations that won’t accept such a short key.
Everything else IPsec-related looks OK, apart from the fact that you’re using horribly outdated and unnecessarily slow cryptographic algorithms (3DES is old and more CPU-intensive than AES; SHA1 is deprecated and rather insecure).
There are a number of issues with the INBOUND_FILTER access list. For instance, permitting inbound traffic from 10.44.48.0/24 on FastEthernet 0/1 makes no sense, unless you really like spoofed packets. Also, you’ve replaced some “host” matches in some ACEs with the text “Public IP”, so I can’t say for sure if those entries are correct or not. If you’re referring to the other endpoint’s IP address, then all should be well. I can see nothing else that could prevent an IPsec Phase2 ESP tunnel from working (and indeed your config does work in my PT lab setup).
Hosts suddenly being unable to communicate with other hosts outside their (V)LAN indicates a communication problem between the problematic hosts and the local gateway.
It’s very unlikely that this has anything at all to do with IGMP or multicasting on these hosts. VRRP does indeed communicate using link-local multicast addresses, but this communication does not involve any hosts, only the routers.
However, VRRP creates virtual MAC addresses for the gateway(s), and since the problem can be temporarily resolved by flushing the ARP cache, you’ve pretty much narrowed it down to an ARP issue. If I had to guess, I’d say your VRRP setup isn’t configured correctly, or you’re using another standby protocol that relies on gratuitous ARPs rather than virtual MAC addresses.
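The virtual MAC is deterministic: a fixed prefix plus the VRID as the last byte, per RFC 3768. A quick helper makes it easy to check what the ARP cache should contain:

```python
def vrrp_virtual_mac(vrid: int) -> str:
    """Virtual router MAC address per RFC 3768: 00-00-5E-00-01-{VRID}."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return "00-00-5E-00-01-%02X" % vrid
```

If `arp -a` shows the gateway IP mapped to anything other than this address (for your VRID), the standby setup isn't using virtual MACs as VRRP should.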
What is the MAC address associated with the gateway IP? Since you’re using VRRP, it should be “00-00-5E-00-01-XX” (where XX is the VRID), and should remain the same at all times.

biggles77;n490157 wrote: Go to Search –> Advanced and then tick Last Visit and click on Forums (as per the attached image). Please let us know if this is not good enough for what you want. I have only tested it once under one browser and unfortunately saw posts that I had already read.
I, too, just got an extremely long list of posts in chronological order.
I’m really puzzled by this. You’d think that being able to see only posts you haven’t previously read would be considered core functionality in a forum? I don’t know about you, but I for one go to a forum to see what’s happened since the last time I was there, not to re-read old posts.
I also don’t feel like manually visiting every subforum just to see if by any chance there might be something there that I haven’t already seen. That’s precisely the kind of repetitive busy-work we want computers to eliminate.

Ser Olmy;n489958 wrote: All I’m trying to do is this: When logged in, I’d like to be able to view a list of all new (that is, unread by me) posts in all forums. This is still supposed to be possible, right?
Quoting myself to point out that apparently the answer is currently “no”. The “Latest Activity” tab shows just that: a chronological list of posts, unread or not.
As for the “New Topics” filter, it does indeed work, just not the way I thought it did. “New Topics” means just that: new topics, that is, new threads. Selecting it will filter out replies to existing threads.
I’ve searched through every link, button and setting, and have yet to find a way to view only unread posts. This was trivially easy in the old forum.
I’d be most grateful if an admin could clarify whether this functionality is supposed to exist or not.

Tim Speciale;n489964 wrote: Serious question: Why is this bad?
Because it gives you the impression that your post is now on page X, when in fact it will appear to everyone else (including yourself after a quick refresh) at the bottom of a different page entirely: the last one.

Tim Speciale;n489964 wrote: If you’re reading through a thread and find a post on page 2 of 10 and you reply to that post, there are still 8 more pages you haven’t read. Why force the user to the end of page 10? I’ve actually hated that about forums for a decade.
As Blood said, replying to a post before reading the entire thread may not be such a great idea. If you want to reply and then keep reading, you can always reply using a new tab.
And it’s not like the current “feature” is genuinely helpful. Had the reply appeared immediately below the post you’re replying to, perhaps indented or in some sort of threaded view, it would at least have been semi-useful. But no, it appears as the last post on the current page, below some other random post (exactly which one will depend on your page view setting). That just has to be a bug.
tmd;n489481 wrote: Mark Channels Read doesn’t work for Ser Olmy
I haven’t looked at the client-side code, but are you saying that the filtering takes place in the browser? If so, that’s insane. :)
Anyway, I’ve stayed away for the last two weeks or so, hoping that most bugs would have reared their heads and perhaps been dealt with by now. I’m still having issues with the “New Topics” filter, though. The problem is quite simple: No matter what I do, the “New Topic” filter seems to do precisely nothing. I always see every post matching the other filter criteria, including posts I’ve manually read. This happens in every browser I’ve tried (IE, Firefox, Pale Moon, Chrome). Could the issue be related to my account?
All I’m trying to do is this: When logged in, I’d like to be able to view a list of all new (that is, unread by me) posts in all forums. This is still supposed to be possible, right?
Edit: Seems there’s a lot of client-side nonsense^W (hey, whatever happened to the strikethrough tag?) scripting going on with this forum. Try this for laughs: Find an old post in a multi-page thread, one that doesn’t appear on the last page of the thread, and reply to it. Result: Your reply initially appears on the same page as the post you replied to, rather than at the end of the thread. This is strictly cosmetic, though; reload the page/thread, and your post is suddenly at the end, where it belongs.
Perhaps there’s an option somewhere that I’m not aware of. This is what I’ve tried so far:
– Pressed the “Mark Channels Read” link on the front page
– Pressed the “Mark Channels Read” link on the front page with the “latest activity” tab active
– Entered a subforum/channel and pressed the “Mark Channel Read” link at the bottom of the page
– Manually read the posts in the channel in question
None of the above prevented the post(s) from the channel from showing up in the “latest activity” tab with the “New” filter on.
(Also, I think it would be great if the search results were presented in a format resembling a channel/subforum, instead of in a format only seen on that tab.)

Tim Speciale;n489352 wrote: If you click on the “Mark Channel Read” link on the home page it should mark the entire forum as read, thus making the “new posts” link work.
Tried that, but unfortunately it didn’t work.
When I log in and go to the “Latest Activity” tab, I get a long list of articles. The list is the same whether I activate the “New Topics” filter or not.
Pressing the “Mark Channels Read” button causes the page to reload, but other than that it doesn’t seem to do anything. I tried reading a post manually and then reloading the “Latest Activity” tab, expecting the post to no longer be listed as “new”, but that didn’t work either.
I’ve been through my fair share of forum upgrades, and of course it always takes a while to get accustomed to a new design, but this is one case where I find the upgrade has made the forum almost intolerably difficult to use (for me, that is). But enough complaining; you asked for bug reports, not user feedback. :)
The “New” filter doesn’t seem to do anything useful. Prior to the upgrade, I logged in at least once a day and read/skimmed all “new” posts (there used to be this really nice link from the main forum page, and I also seem to remember a very useful menu …), but when I use the “new” filter on the “latest activity” tab, I get pages and pages of posts dating back months.
The “subscriptions” tab is empty. I’m pretty sure I used to be subscribed to a few dozen threads.
The login menu doesn’t appear in Pale Moon (a Firefox-based browser), so I had to use IE to post this message.
Edit: Now I can log in using Pale Moon, and the “my subscriptions” tab is no longer empty. That leaves only the “new” filter, which is still returning lots of old threads that I know I’ve read.

Apr 25, 2015 at 9:16 am in reply to: Managing off-site machines with only a corporate DC #388801
Re: Managing off-site machines with only a corporate DC
You’re quite right in that problems with broken trust relationships between PCs and Domain Controllers were basically unheard of in the NT/2000/2003 era, even though, as you say, the password interval was much shorter back then.
I think it’s fair to conclude that:
- the password change interval is not directly related to the issue, otherwise any computer left without network connectivity for n days, where n is the interval, would immediately experience a broken trust relationship (and as you say, the interval used to be 7 days)
- the password update mechanism is somehow related to the broken trust relationship, as the problem doesn’t occur on systems that regularly contact a DC to change the computer account password
“Trust relationships” must be related to encryption/signing keys, and as with all keys in Kerberos-based systems, they expire after a time and must be rotated. My guess is that a password update forces rotation of these keys, just as it does with EFS keys tied to user accounts.
Speculation: Perhaps the broken trust is the result of a failed password update on a computer account where the flag “password must be changed at next logon” is set. After all, when that happens to a user account, encryption keys are lost.
Like you, I can’t remember ever having this problem on Windows NT or Windows 2000 systems. First time I ever saw it was on a Windows XP system in a Windows 2003 domain, but it’s only in Windows Server 2008 R2-based domains with Windows 7 clients it’s been a frequent problem.
This sums up my experience with broken trust relationships:
- Pre-Windows 2003 domains with pre-Windows XP clients: The problem was unheard of.
- Windows 2003 domains with XP or Vista clients: I’ve seen it only a handful of times, and in those very few cases it could very well have been caused by some random/unrelated client issue.
- Windows 2003 domains with Windows 7 clients: I only have 5 clients still running a setup like this (2003 SBS, mostly), and there are no more than 20 PCs in each network. But for what it’s worth, none of them have ever had this problem, even though their laptops are frequently disconnected from the domain for days and weeks at a time.
- Windows 2008-based domains with XP/Vista/7 clients: I don’t have enough data to draw any conclusions. Most clients with 2003 DCs dragged their heels to the extent that by the time plans for migration were approved, 2008 R2 was available and they just skipped 2008 altogether (on the DC side). I’ve had few clients that were using 2008 SBS and a mix of XP, Vista and 7 clients, and while I’m not exactly fond of 2008 SBS for a variety of reasons, broken workstation trusts isn’t one of them.
- Windows 2008 R2-based domains with XP or Vista clients: Hardly ever saw the problem.
- Windows 2008 R2-based domains with 7 clients: Trust relationships are lost frequently enough for it to be a real nuisance.
You could very well be right about the problem first appearing in Vista, but going unnoticed until 7 became prevalent. I’ve actually sold and installed a significant number of PCs with Vista Business, but the vast majority of them were desktop systems. For performance reasons, laptop users wanted to stick with XP, which means the Vista systems were mostly using a permanent, wired network connection.

Apr 24, 2015 at 6:58 pm in reply to: Managing off-site machines with only a corporate DC #388800
Re: Managing off-site machines with only a corporate DC

cruachan;291622 wrote: This is not correct, as the password change for the machine account is initiated from the client side (only when connected to the network, so no changes if it can’t contact AD) and not the AD side.
This has nothing to do with how often a password change is initiated by a client; it has to do with how long it takes for a computer account password to expire when the client fails to update it. That’s controlled by an AD setting, and assuming the defaults apply, it’s enough to keep a PC disconnected for between 4 and 8 weeks (depending on when the password was last changed) to trigger this condition.
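As a toy model of that window (assuming the default 30-day machine-account change interval; this illustrates the arithmetic behind the 4-8 week figure, not AD's actual expiry logic):

```python
# Toy model: a PC is at risk once its stored machine-account password is
# roughly two change intervals old. With the default 30-day interval this
# yields a 4-8 week window, depending on how recently the password was
# changed before the PC went offline. Illustrative only.

CHANGE_INTERVAL_DAYS = 30  # default machine-account password change interval

def days_until_stale(days_since_last_change: int,
                     interval: int = CHANGE_INTERVAL_DAYS) -> int:
    """Days of disconnection before the stored password is two intervals old."""
    return 2 * interval - days_since_last_change
```

A PC that changed its password the day it left the network has the full 60 days (about 8 weeks); one that was just about to change it has only 30 (about 4).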
That’s why the Internet is littered with posts about computers (apparently) losing their connection to an AD domain. If you’ve operated AD networks and have never experienced this, I believe you’re in a relatively small (and lucky) minority.
For reasons I haven’t bothered to research in any detail, Windows 7 systems are much more likely to get out of sync than previous incarnations of Windows. I (and many others) have even seen desktop PCs with a wired network connection fail to update their computer account passwords in a timely fashion.
Edit: I see that the article in the link actually says that computer account passwords never expire. That is not correct. OK, technically, the computer account password doesn’t expire as such, and it is indeed not subject to any of the GPO password settings affecting user accounts, but if a computer account is unused for an extended period of time, the trust relationship expires and the only way to fix it is to, well, reset the account password.

Apr 24, 2015 at 6:08 pm in reply to: Managing off-site machines with only a corporate DC #388799
Re: Managing off-site machines with only a corporate DC

cruachan;291619 wrote: Normal PCs/laptops/member servers do NOT tombstone/drop off the domain or whatever else you want to call it because they haven’t contacted a DC for a period of time.
The computer account object of a PC may not get tombstoned, but the password will expire, causing the computer to effectively drop out of the domain.
In that scenario, the computer account object still very much exists in AD, and the PC will appear to be a member of the domain, but the security logs on both the DC and the PC fill up with authentication errors and users will no longer be able to log in using domain accounts (unless the PC is disconnected from the network and locally cached credentials are available for the account in question).
Re: Spamhaus, UCEProtect, CBL – are they worth it?
Non-delivery reports often state the name of the blocklist or filtering service responsible for rejecting an e-mail. If the blacklisting seems a complete mystery, contacting the server admin or the company responsible for the filter/blacklist could provide some answers.
In my experience, the DNSBL providers are more than happy to tell you exactly why a server or domain was blacklisted. In fact, Spamhaus will tell you which out of a number of different lists you’re on, with links that explain what each list does, how one might end up on it, and how one should go about troubleshooting the issue and getting de-listed.
If a server or domain gets blacklisted repeatedly, there must be a reason. It is not by any means a common occurrence for servers or domains to be erroneously blacklisted. In your case, you mentioned it could be related to your web hosting provider, and if that is indeed the case, you might consider taking your business elsewhere.
Without realtime blocklists we would all be drowning in spam. Last time I checked, spam constituted more than 90% of all e-mail traffic worldwide. The reason our inboxes aren’t completely flooded with junk is that every single e-mail provider (Google, Microsoft, Yahoo etc.) and ISP uses blocklists. That also means that if there was a general problem with the quality of these lists, just about everybody with an e-mail account anywhere would be affected.

Apr 24, 2015 at 12:13 pm in reply to: Managing off-site machines with only a corporate DC #388797
Re: Managing off-site machines with only a corporate DC

kingbear2;291612 wrote: Right, but then if corporate internet is down, they lose all internet, unless I set the secondary DNS as 18.104.22.168 or OpenDNS.
And if you specify a public secondary DNS server, the clients will keep using it even after the VPN comes back up, and won’t be able to resolve names on the corporate network until they either reboot or disconnect/reconnect to reset their DNS settings. Not recommended at all.

kingbear2;291612 wrote: This is interesting. We have meraki’s in there so I will contact them and see if it’s possible.
The feature you’re looking for is called ‘conditional forwarding’. Most DNS servers are capable of this, including the caching DNS servers found in many routers. The question is whether or not your router has a proxy DNS feature, and if so, whether the manufacturer has included settings related to conditional forwarding in the management GUI.

kingbear2;291612 wrote: I don’t suppose there is any way of doing this from the hosts file, is there?
Nope. The hosts file can contain static A record entries, but not directives related to DNS servers.
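For illustration, the entire grammar of a hosts file is "address, then one or more names"; a minimal parser shows there is simply nowhere to express a forwarding directive. (The sample entry below is made up.)

```python
# Minimal hosts-file parser sketch: each non-comment line is an address
# followed by one or more hostnames. That's the whole format; there is
# no syntax for delegating lookups to a DNS server.

def parse_hosts(text: str) -> dict:
    """Map lowercase hostnames to the list of addresses assigned to them."""
    entries = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            entries.setdefault(name.lower(), []).append(addr)
    return entries
```

Anything beyond static name-to-address mappings (forwarders, wildcards, record types other than A/AAAA equivalents) has to live in a real DNS server.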
Re: Spamhaus, UCEProtect, CBL – are they worth it?

Blood;291613 wrote: Our office IP address has never been blacklisted. The blacklisted IP’s have always been the website IP addresses.
And I guess the web site is accessed using the same domain name as the one you’re using for e-mail, which is why your mail flow is affected.
If your web server was blacklisted by a reputable RBL provider, there are only two possibilities:
- Your web server was at some point involved in spamming or the distribution of malware
- The hosting provider has allowed one or more spammers to use their services for an extended period of time, and has ignored repeated abuse reports, causing their IP range to be blacklisted
There’s only one slight problem with scenario #2: If your web server IP was blacklisted, that wouldn’t necessarily affect your domain name. And as long as the web server isn’t involved in e-mail traffic, mail flow shouldn’t really be affected either.
It is possible that the domain name got blacklisted if there’s a PTR record pointing to the web site FQDN from the blocked IP address, and I suppose content-based filters could then pick up on that and block/filter mails from that domain or mails containing links to the web site, but that’s not a common scenario at all.
There’s something about this that doesn’t sound quite right.
When you found you were unable to send e-mails due to this blacklisting, what was the exact error message in the non-delivery report(s)?
Re: Spamhaus, UCEProtect, CBL – are they worth it?
Was the blacklisted IP address your own, or one used by the filtering service?
Would an outbound SMTP connection from an internal (and possibly infected) PC be blocked by your router/firewall? Would the attempt, successful or not, be logged?
If the security of your mail server was somehow compromised and someone were able to install a bulk e-mail client with its own SMTP engine, would you be able to tell what was going on?
A little over a year ago, a client experienced a security incident where an unsecured account was used by an outsider to log in to a local server. (Specifically, the “ftp” account had a valid shell and a simplistic password, when it should have had neither.) As a result, an outsider was able to install bulk e-mailing software, which he or she then used manually by logging in at irregular intervals to send out a few million spam e-mails.
Each time, the deluge of outbound spam had already stopped by the time the local admin became aware there was a problem (due to the server suddenly being blacklisted). He didn’t see anything in the SMTP logs (since the local SMTP service wasn’t actually involved) and also didn’t spot the software in the ftp home folder, so for weeks this client insisted they were the victims of repeated, baseless blacklistings by incompetent/malicious blacklist providers on the Internet. As you can see, that turned out not to be the case.
Re: Spamhaus, UCEProtect, CBL – are they worth it?
I understand your position, I really do. But the fact is that it should be almost trivially easy for anyone to avoid ending up on a realtime blocking list:
- Make sure your public DNS records are valid.
- And speaking of DNS, make sure your domain has SPF records listing the IP addresses of your SMTP servers. That will prevent other spammers from using your e-mail addresses in spoofed “From” fields, something which in turn could cause some rubbish filtering systems to incorrectly block e-mails from your domain.
- Don’t just scan inbound mail. You should run outbound e-mails through a spam filter as well, to catch mails from infected client systems.
- If you’re NATing clients behind the IP address of a router or firewall, block outbound SMTP on port 25 from any internal address other than that of your mail server. This is particularly important if you use the same public IP address for both NAT overloading and outbound SMTP.
- Consider routing outbound e-mail through your ISP’s Smart Host. They will typically have their own scanning/filtering system, which you then get to use at no extra cost.
- Log outbound SMTP traffic (ports 25 and 587); don’t rely on the server logs alone. NetFlow is great for this, but if your router doesn’t support NetFlow, a mirror/monitor port on a switch and something like Snort or NTOP will do just fine.
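On the SPF point in the list above: an SPF record is just a TXT string listing permitted senders. A toy evaluator handling only the `ip4:` mechanism gives the flavour (the record below is hypothetical; real validators implement RFC 7208 in full, including `include:`, `a`, `mx` and macro handling):

```python
import ipaddress

# Hypothetical SPF record for an example domain: two permitted sender
# ranges, then a hard fail for everything else.
SPF = "v=spf1 ip4:203.0.113.10 ip4:203.0.113.0/28 -all"

def spf_permits(record: str, sender_ip: str) -> bool:
    """True if sender_ip matches an ip4: mechanism (only mechanism handled here)."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:        # skip the "v=spf1" version tag
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
        elif term in ("-all", "~all"):     # explicit default: not permitted
            return False
    return False
```

Receiving servers perform this check against the envelope sender's domain, which is why publishing accurate SPF records makes spoofed "From" abuse (and the resulting reputation damage) much harder.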
Sure, even if you do all this, you may still encounter scenarios where a client or partner organization has made a particularly poor choice in selecting a mail filtering service or product and as a result, your mails end up being blocked for absolutely no good reason. That can usually be fixed pretty easily by politely requesting that the server operator on the receiving end add your server to a whitelist. In any case, such corner cases will only affect mails to one specific organization or user.
Re: Spamhaus, UCEProtect, CBL – are they worth it?

Blood;291587 wrote: First – No ‘spam’ infections or evidence of spamming activity was found on any of these occasions when checked after one of our IP’s had been blacklisted. Secondly, the phishing incident was identified by our hosting provider and not by any of the blacklisting organisations (the affected IP address was not blacklisted).
We use Heart Internet to host our websites. This is the same provider that immediately shut down the website. I have to admit that I have no idea how reputable they are but based on our own experience of how seriously they take security, and how helpful they have been with all aspects of their service I have no reason to suspect they are slack.
As I said, there are bad actors trying to profit from spam by providing DNSBL services of poor quality or, as you mentioned in another post, by charging a fee for putting IP addresses on a ‘whitelist’. If you were affected by any of those lists, the fault lies with the operators of the receiving mail servers for using services from shady companies that are no better than the spammers themselves.
Having said that, most network and server admins using DNS blocklists do indeed take the time to evaluate the quality of a blocklist before including it in their spam filtering strategy. If you find that you’re unable to send e-mails to a significant portion of Internet users due to blacklisting, you’re probably listed by at least one of the three major DNSBL providers, which are SpamHaus, SpamCop and SORBS.
Now, consider how an IP address might end up on one of those lists:
- If multiple spam e-mails are sent from your IP address to one of the DNSBL provider’s spam traps, the address will be placed on a list of known spammers for at least 48 hours
- If multiple automatic spam reports are received from multiple spam filtering gateways at different organizations (usually more than 10), the address also gets placed on the ‘known spammers’ list for a specific period of time (again, 48 hours is the commonly used interval)
- If a scan reveals the IP address to be hosting an open SMTP relay, it ends up on the list of open relays until a subsequent scan indicates the opposite
- If a server is found to be hosting an open web or SOCKS proxy service, it ends up on the ‘open proxy’ list
- If an IP address hosts a web site advertising or selling products that are marketed through spamming, the address (and occasionally also the domain name) is put on a blocklist
- If a web server is hosting malware, it ends up on a blocklist
- If traffic generated by malware is found to be originating from an IP address, the address is added to a list of infected systems
- If WHOIS information indicates that an address range is used for dynamic IP allocation (and as such should never be hosting legitimate SMTP services), the range is added to a ‘network policy’ block list
- If the DNS records for an address or a domain are found to be invalid (MX records pointing to IP addresses instead of hostnames; PTR records pointing to non-existent hostnames; an invalid e-mail address in the SOA record; A records pointing to invalid IP addresses), a domain or IP address may be added to a blocklist
- And finally, if a service provider is found to be actively and persistently assisting spammers in trying to get around any of the measures above, IP addresses allocated to that service provider may be manually added to a ‘known spammers’ list
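Mechanically, all of these lists are queried the same way: reverse the IPv4 octets, append the list's DNS zone, and look the name up; any A-record answer means "listed". A sketch (zen.spamhaus.org is Spamhaus's combined zone; check a provider's usage policy before querying it from production scripts):

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNSBL lookup name: reversed IPv4 octets + the list's zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """True if the DNSBL returns an answer for this address (does network I/O)."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN: not on the list
        return False
```

The returned A record is usually a 127.0.0.x address whose last octet encodes which sub-list matched, which is how providers like Spamhaus can tell you exactly why you were listed.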
Since SMTP (and HTTP/HTTPS) traffic is TCP based and thus cannot be used from a spoofed IP address, how would the IP address of a legitimate, non-spamming, non-malware-spreading organization end up on one of those lists? Unless they got an address or address range that was already blacklisted due to the actions of a previous owner, I don’t really see how that is possible.
I’m not saying that incorrect blacklisting by a reputable DNSBL provider is absolutely impossible and cannot ever occur, because obviously no system is perfect and 100% bug free, but as I said: In 15+ years as a consultant and network admin I’ve yet to see a case of that actually happening.

Blood;291587 wrote: I understand the thinking behind blacklisting an entire IP range, but I still regard it as prehistoric. It adversely impacts the innocent. My view is that it is not for the blacklisting organisations to ‘police’ these services, instead it should be up to the services one level above the providers, or those at the top of the chain who assign the IP addresses in the first place. It is their responsibility to ensure the organisations who manage these services are up to the task.
I and others use DNSBL providers because we want them to identify likely sources of spam. It is definitely their job to ‘police’ these services, just as it is the job of anti-virus vendors to ‘police’ the spreading of malware.
Senders of spam or malware are often ‘innocent’ in the sense that they are completely unaware that their PCs or servers are being used by spammers, or that their service provider is allowing spammers to play musical chairs with IP addresses in their range, but if you’re operating a server on the Internet, you can’t really afford to be ignorant of these issues.
As for Tier 1 providers, they are already actively assisting in the fight against spam and malware by filtering malicious network traffic. However, they do not concern themselves with the content of the packets, only with traffic which is clearly disrupting legitimate IP services, such as (D)DoS attacks.
In other words, they stick to ‘policing’ Layer 3 traffic, which I believe is a very sensible approach for a provider of high-bandwidth Layer 3 connectivity. They will only block low-volume traffic if they receive reports of malicious activity from peering partners or downstream ISPs (their customers), or if government agents turn up on their doorstep with a court order or one of those infamous National Security Letters (in the U.S. that is).
Re: Spamhaus, UCEProtect, CBL – are they worth it?
Allow me to provide a contrary opinion on this matter.
First, let me say that there have been, and still are, some less-than-fully-professional RBL services out there. For instance, one service that shall not be named insists on blacklisting mail servers using IP addresses without PTR records. Not only are such records not required by any standard, but the presence or absence of such records says absolutely nothing about whether a mail server is likely to send spam or not. That type of filtering is obviously counterproductive, as it will result in a lot of false positives.
Neither SpamHaus nor SpamCop has ever done anything like the above. Their blacklists are well-documented and based on good data and sound reasoning. I’ve used one or both of these RBL providers on every mail server I’ve ever installed, and in 15+ years I have yet to experience any real issues with the quality of their services.
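For context, a DNSBL lookup is just an ordinary DNS query: the mail server reverses the octets of the connecting IP address and appends the list’s zone name, and any answer means the address is listed. A minimal sketch of the name construction (the zone is Spamhaus’s public ZEN list; an actual check would go on to resolve the resulting name):

```python
def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNS name to query for an IPv4 address on a DNSBL."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

# 127.0.0.2 is the conventional DNSBL test address
print(dnsbl_query_name("127.0.0.2", "zen.spamhaus.org"))
# -> 2.0.0.127.zen.spamhaus.org
```

If the name resolves (typically to something in 127.0.0.0/8), the server rejects or flags the message; if the query returns NXDOMAIN, the address is not listed.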
Yes, I’ve certainly had to deal with blacklisted mail servers on numerous occasions, and of course the customer was up in arms about their mails being rejected by other servers using these blacklists. However, in absolutely none of these cases were the RBL providers to blame for the blacklisting.

Blood;291435 wrote: We have had two of our website IP’s blacklisted a couple of times in the last 18 months. We employ a web-master to manage our websites and he updates them and troubleshoots any issues for us. When we receive a notification I contact him and ask him to check the affected site.
We have been blacklisted 4 or 5 times in total. Once one of our websites had been hacked and was being used for phishing but it was the web host who identified it and shut the site down immediately. We cleared it up and had the site back online within a few hours.
That’s an excellent example of blacklists working exactly as intended. Your web server must have had serious security issues, and you should be asking your webmaster and/or server administrator some hard questions. Things like that don’t just happen.

Blood;291435 wrote: Apart from that the notifications we have received have said the blacklisted IP addresses were being used to pump out spam.
OK…

Blood;291435 wrote: I have since discovered that when an IP is blacklisted it may be because one or more IP’s in a range of addresses assigned to a provider have been pumping out spam and so the monitoring organisation takes it upon themselves to blacklist the entire IP range and thus blacklist IP’s that are clean.
I can’t believe that anyone would use such a prehistoric method to deal with blacklisting an IP.
I certainly can.
It seems your organization is using an ISP or VPS provider that takes a laissez-faire approach to spammers. A number of such providers exist, particularly in the low-cost end of the spectrum. They make money by allowing spammers to buy services, and when the complaints come in, they close down the spammers’ accounts after a while, but do nothing to prevent the same spammers from re-registering and purchasing new services with new IP addresses.
These providers are the scourge of the Internet, and the only way to deal with them is to block their entire IP range. If your provider gets this treatment, there’s a reason for it. If you don’t want to become collateral damage in the war against spam, switch to a provider with a more reasonable anti-spam policy.

Blood;291435 wrote: I have also discovered that usually it is just the IP address that is put on the blacklist, and not the domain name which is reassuring.
Domain names are only blacklisted when they’re used to host services related to illegal activities, such as (fake) websites pushing malware.

Blood;291435 wrote: But, two years ago we were suddenly unable to send mail to a local government authority. All other mail was being received without a problem. I discovered that the authority was using a reputation list provided by McAfee and that this list included a single identification of one instance of spam being sent out from one of our website IP’s four years previously. The identification was another ‘false-positive’ but McAfee or the original provider had linked our domain name to the IP and blocked mail from our domain name. It took three days of painful bureaucracy to fix this.
McAfee hasn’t been considered a professional provider of, well, anything for at least a decade. Even John McAfee has done his best to distance himself from their products, and that’s a guy who openly admits to using drugs and bribing 3rd world officials.
For some reason, Intel recently bought McAfee. Let’s hope they clean up their products.

Blood;291435 wrote: So, does anyone else have this problem?
A lot of people claim that RBLs have given them numerous problems. However, every time I’ve looked into such reports, the problem has turned out to be with the ISP, the VPS provider or the customer complaining about the RBLs.
For instance, if your IP address is on a list of dynamic IP addresses when the ISP has actually reassigned that range to customers with static addresses, it’s perfectly possible that the ISP has neglected to update the database.
(I actually ran into that exact problem once, and all it took to fix it was a phone call to the ISP and a mail to SpamHaus. The RBL providers have to rely on reports from spamtraps and data in WHOIS; they aren’t mind-readers.)

Blood;291435 wrote: More importantly, are there professional reputation services that use intelligent methods to blacklist sites rather than knee-jerk responses.
SpamHaus and SpamCop certainly use intelligent methods. If you’re affected by their blacklists, chances are there’s a very good reason for it.

Apr 23, 2015 at 6:35 pm in reply to: Managing off-site machines with only a corporate DC #388791
Re: Managing off-site machines with only a corporate DC
Why would it be a problem if all DNS traffic went through the VPN? The packets are tiny and the DNS cache on the clients will prevent them from sending unnecessary requests.
If this is really an issue, get a VPN router capable of zone-based DNS redirection, like the ZyWall USG series, and use that as a gateway on the remote site:
- Have the remote site clients use the router as a DNS server (the router’s built-in DHCP server can provide the parameters)
- Set up a client/IPsec VPN between the router and the corporate network
- Create two DNS forwarding rules: One forwarding requests for “ad.dns.name” (substitute the actual AD DNS zone name) to an internal DNS server, and another forwarding everything else to the ISP’s DNS server(s)

Apr 17, 2015 at 4:40 pm in reply to: Deleting the files and directories of an directory without deleting this root directo #388790
Re: Deleting the files and directories of an directory without deleting this root dir

balubeto;291407 wrote: I have tried in this way: del /f /s /q “*”
but it does not work because all files are deleted but its nested subdirectories are not deleted. Why?
Because that’s what you told it to do. del will only delete files, never directories:
del /s directory\* = “delete files matching the wildcard * in directory and any of its subdirectories”
To delete directories, empty or otherwise, try this instead: for /d %a in (*) do rd /s /q “%a”
The rd command doesn’t accept wildcards, so a for loop is necessary. Remember to use double percentage signs (%%a instead of %a) if you use for loops in a batch file.
Of course, rd won’t delete any files from the root directory, so an additional del /f /q * will be needed to get rid of those files as well.
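If a scripting language is available, the same “empty the directory but keep the directory itself” operation is a little less fiddly. Here’s a sketch in Python that is equivalent in effect to the rd/del combination above (the function name is my own):

```python
import os
import shutil

def clear_directory(root: str) -> None:
    """Delete every file and subdirectory under root, keeping root itself."""
    for entry in os.listdir(root):
        path = os.path.join(root, entry)
        if os.path.isdir(path) and not os.path.islink(path):
            shutil.rmtree(path)   # like: rd /s /q "entry"
        else:
            os.remove(path)       # like: del /f /q "entry"
```

Unlike the cmd version, this handles both files and directories in a single pass, and there’s no double-percent-sign quirk to remember when moving it into a script.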