Facebook has recently released some interesting data about its Facebook Immune System (FIS). According to Facebook, FIS checks around 650,000 actions every second (it can handle some 25 billion actions every day – an amazing figure) to protect users from spam and other cyber attacks; fewer than 1% of users report issues around spam.
Facebook has built FIS around signature-based detection that can distinguish spam from legitimate messages (and flag ‘creepers’ – those who use Facebook but cause problems for others), based for example on the links in spam messages, keywords and IP addresses. Spammers can evade this by using URL-shortening services and by switching systems (which switches their IP addresses). When that happens, the system falls back on keyword scanning against a blacklist of words – “iPad” and “free” are two common examples.
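To make the idea concrete, here is a minimal sketch of that fallback layer – keyword-blacklist scanning plus a link-domain check. All names, keywords and domains here are hypothetical illustrations, not Facebook's actual rules:

```python
import re

# Hypothetical blacklist of keywords commonly seen in spam messages
SPAM_KEYWORDS = {"ipad", "free"}

# Hypothetical blocklist of link domains, including URL shorteners
SUSPECT_DOMAINS = {"bit.ly", "tinyurl.com"}

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def looks_like_spam(message: str) -> bool:
    """Flag a message containing a blacklisted keyword or a suspect link domain."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & SPAM_KEYWORDS:
        return True
    for domain in URL_PATTERN.findall(message):
        if domain.lower() in SUSPECT_DOMAINS:
            return True
    return False

print(looks_like_spam("Claim your free iPad now!"))        # True
print(looks_like_spam("See you at the meeting tomorrow"))  # False
```

This also shows why shortened URLs are such a headache for signature systems: the shortener hides the real destination domain, so a simple domain blocklist has to blacklist the shortener itself or resolve the link first.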
Statistic: Since the introduction of FIS some three years ago, spam has accounted for less than 4 per cent of the total messages on Facebook.
The FIS team is supported by some 30 security experts who manually search for spam across the Facebook network. One particular threat comes from socialbots: fake, bot-driven profiles that behave like you or me on Facebook. A socialbot aims to connect with as many ‘friends’ as possible, tricking users into accepting its requests and thereby gaining access to their profile data. Socialbots are very difficult to detect automatically, so FIS has to rely on its security experts to identify these threats.
Statistic: FIS is probably the second largest defence system outside of the Web itself – a staggering size considering the 800m+ people who use Facebook daily.
It’s worth pointing out that a large-scale socialbot attack has yet to happen; however, it’s only a matter of time before we see this or similar innovations. As noted, FIS relies on signatures of known behaviour (akin to the HIPS model) rather than behavioural analysis. The FIS policy and classifier engines therefore offer clear opportunities for future development, in particular towards specification-based behavioural-analysis policies rather than the anomaly model that Facebook currently uses.
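The behavioural direction suggested above can be sketched very simply: instead of matching a fixed signature, score how far an account's current activity deviates from its own history. This is an illustrative toy (a z-score over hypothetical hourly friend-request counts), not Facebook's actual classifier:

```python
from statistics import mean, stdev

def anomaly_score(history, current):
    """Z-score of the current activity rate against the account's own history."""
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (current - mu) / sigma

# Hypothetical hourly friend-request counts for a typical account
history = [2, 1, 3, 2, 0, 1, 2]

# A sudden burst of 40 requests in an hour scores far above a 3-sigma threshold,
# the kind of pattern a socialbot farming 'friends' might produce
print(anomaly_score(history, 40) > 3)  # True
print(anomaly_score(history, 2) > 3)   # False: normal activity
```

A signature-based system only catches what it has seen before; a behavioural baseline like this can flag a brand-new attack pattern, which is why it is a natural next step for the policy and classifier engines.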