We have /ban at the moment, but I can see that it can be a bit strong if a user has only recently started being annoying.
Would it be better to have some kind of temporary ban/hide that means that the user is muted for everyone? That way, the annoying user is not encouraged, and hopefully they calm down.
After /ban, our troll simply registers a new account shortly afterwards.
Yes, a global /hide (user muted for everyone) for 24 hours, with no sidebar notification (i.e. no "User @someadministrator muted @sometroll in this room" message), would be a better solution.
On an individual basis, be able to ignore/mute/block user. Ideally, you would still be able to see that the muted user chatted but not see the message contents. This is in order to maintain context in case someone else interacts with them.
https://github.com/gitterHQ/gitter/issues/1093
Ideally, you would still be able to see that the muted user chatted but not see the message contents. This is in order to maintain context in case someone else interacts with them.
-1
– a precautionary downvote, because if that is implemented as a placeholder for every ignored message, the resulting 'visual noise' of empty placeholders would be far from ideal.
Consider the XenForo approach to ignoring users, where just once per page, below all other posts (near a naturally noisy, easily ignored standard footer), the reader is notified that there exists a post from an ignored user. They can then optionally click just once to temporarily view that person's posts on that page alone. (In MacRumors Forums, that approach seems to work very well for everyone who uses the ignore feature. Certainly it works perfectly for me.)
As Gitter is not paginated, the same approach is not applicable.
As a separate enhancement, you might allow @ mentions to be excluded, by default, from the effects of ignoring.
We are currently trying out some alternatives to individuals muting other individuals.
"Easy" muting may stop the person from annoying you in a community, but chances are that they are annoying everyone else in the community as well. Ignoring them leaves the problem for everyone else.
A community is only as good as its members. If someone is making the community worse (and refuses to improve their behaviour), then they can be banned by the community admins with /ban @username.
tl;dr: root cause improvements coming, /ban @username until then.
@trevorah I can't ban trolls from channels I don't own.
Please implement block. Ignore is the normal, healthy way for communities to progressively ostracize trolls. Every network on Earth, no matter how often their founders decided they were going to invent a better, not-round wheel, has inevitably said "oh hey it turns out people need to be able to self-service."
sometimes the offending behavior is more subtle - the user may be a net positive to the community but targets one (or more) users with nuanced social jabs (not enough for a ban, but enough to warrant being ignored)
It would be nice to be able to get some help from the people at https://gitter.im/gitterHQ/gitter without having to see pages and pages of gay porn being posted by a troll.
My suggestion to the Gitter staff is to drop everything else and address this growing nuisance immediately, because it's hurting the service more than any other problem on your plate.
Many of us can no longer use Gitter at work (or even at home, depending on who's present) due to the risk of something horribly untoward popping up on the screen at any given moment.
The direct links to CP are a serious and legitimate concern, because the Gitter client requests the files directly from the source when displaying the preview, making it appear as though the unwitting end-user requested the file in a normal browsing context.
At the minimum, disable the automatic previews for photos and videos, or better yet, make it a configurable option in the client. And don't request the files unless the user explicitly expands the content (i.e., no pre-loading)!
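For illustration, a minimal client-side sketch of click-to-expand previews; the class names and helper are invented for the example. The point is that no network request for the media happens until the user explicitly opts in:

```typescript
// click-to-expand preview: the <img> element is only created (and therefore
// only fetched) after an explicit click, so nothing is pre-loaded
function makeCollapsedPreview(mediaUrl: string): HTMLElement {
  const wrapper = document.createElement('div');
  wrapper.className = 'collapsed-preview'; // hypothetical class name

  const button = document.createElement('button');
  button.textContent = 'Show media preview';
  button.addEventListener(
    'click',
    () => {
      const img = document.createElement('img');
      img.src = mediaUrl; // the request happens here, not before
      img.alt = 'embedded media';
      wrapper.replaceChild(img, button);
    },
    { once: true },
  );

  wrapper.appendChild(button);
  return wrapper;
}
```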
Even if it's just a bit of javascript that allows clients to right click on a name and pick "hide user". It should be absolutely trivial to apply a CSS style of display: none to any posts (existing or subsequent) by that user in the DOM. At least, I would think. It doesn't even need to persist from session to session. I haven't looked at the client code but how could this be more than like five minutes of dev time?
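For what it's worth, a rough sketch of that hack, assuming each chat item carries a hypothetical data-username attribute (later replies explain why this alone isn't enough):

```typescript
// client-side "hide user" hack: one injected stylesheet rule per blocked
// user hides existing and future chat items alike; it does nothing about
// notifications, unread counts, emails, or other clients
const hiddenUsers = new Set<string>();
const style = document.createElement('style');
document.head.appendChild(style);

function hideUser(username: string): void {
  hiddenUsers.add(username);
  style.textContent = [...hiddenUsers]
    .map((u) => `.chat-item[data-username="${u}"] { display: none; }`)
    .join('\n');
}
```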
Hi folks, we've held off doing this for a long time because incidents were pretty isolated and we tried to deal with them centrally through other means.
In the background, we've been building internal spam detection types of tools (I don't want to go into details, as that risks exposing logic and hence workarounds).
There have been two pretty nasty incidents recently and they are 100% not acceptable. As such, we're going to look at implementing block in a smarter way (you can't just do a display:none; blocking needs to prevent notifications, emails, API access, mobile messages, etc.).
This will be a priority for us over the coming weeks.
Once we open source, sure, we'll accept Pull Requests, but we aren't about to start granting access to the codebase; we were hoping to have that done first, but this will take priority over any of the open sourcing work.
@StoneCypher the problem has accelerated - yes, I entirely agree we should have done it earlier before it became a big problem and I hold my hand up for that and apologise for it. As a small business with a small team, we had to focus on other issues to keep the product running and the business funded so that we could keep Gitter going for everyone.
In all seriousness though, it's not two lines of code, it really isn't. If it were purely a client-side concern, like an installed IRC client, sure, that would be a lot simpler. But you're right, it's not rocket science, and we will fix it.
In many ways, we've tried to be too clever for our own good here: we've tried to detect these messages and block them centrally - that way nobody has to act or block, nobody sees even the first unwelcome and ugly message, and none of the completely publicly accessible archives get that content in the first place. Sadly, those efforts have failed.
@cbj4074 we wouldn't want to default that to on, so again you have the problem of others still seeing horrible things that I cannot unsee from the last week.
we have a good plan, we're going to do:
content reporting - i.e. you can report a message and we'll have some central logic that can then vanish it for everyone (see the sketch after this list). we'll do this first; it's actually a lot simpler and will benefit the entire network. it won't, however, remove the message immediately - it will only do so once some logic kicks in, but we imagine it will be pretty fast if many people are reporting.
blocking as per this ticket. this will have the immediate effect of removing the content, but only for the blocker. we will do this once the above is done as it's a bit more complex and taxing on our infrastructure, which is under some heavy load at the moment and worrying us, but probably not noticeable to anyone else.
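To make the reporting flow concrete, here is a minimal sketch of how the central logic might kick in; the threshold value, names, and in-memory storage are all assumptions for illustration, not Gitter's actual implementation:

```typescript
const REPORT_THRESHOLD = 3; // assumed, tunable value

// distinct reporters per message id; a Set de-duplicates repeat reports
const reportsByMessage = new Map<string, Set<string>>();

function reportMessage(messageId: string, reporterId: string): void {
  let reporters = reportsByMessage.get(messageId);
  if (!reporters) {
    reporters = new Set<string>();
    reportsByMessage.set(messageId, reporters);
  }
  reporters.add(reporterId);

  // once enough distinct users have reported, vanish it for everyone
  if (reporters.size >= REPORT_THRESHOLD) {
    vanishMessageForEveryone(messageId);
  }
}

// placeholder for the real removal path (soft delete, realtime event, ...)
function vanishMessageForEveryone(messageId: string): void {
  console.log(`message ${messageId} removed for all clients`);
}
```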
the blocking stuff is really not as simple as you think. i'll give you a few other edge cases if you just do a display:none client-side hack
The client does an initial snapshot and gets 20-ish messages server-rendered to start with. When you then scroll up (literally, on scroll) it will go out to the server to pull more messages. Now imagine the last 20 messages were from the blocked user: you'd get a totally blank screen, the client wouldn't emit any events to go and collect messages from the server, and so the spammer would have rendered the room completely unusable for you.
Most people have the default of unread message tracking, where messages are marked as read when they are visible on the page. If the messages are display:none, you have permanent unread counts that will never go away.
All mobile clients, API apps, etc. will still get the blocked content
You will still receive notifications for blocked content
So we will have a solution for blocking users, it will just take a little more time to implement as we need to do it in an efficient way on the server.
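As an illustration of why this belongs server-side, here is a hedged sketch of a paging fetch that filters blocked senders out before counting a page, which avoids the blank-screen edge case above; all names and the array-backed store are assumptions:

```typescript
interface Message {
  id: string;
  senderId: string;
  text: string;
  sentAt: number; // epoch millis
}

// hypothetical server-side page fetch: blocked senders are filtered out
// *before* the page is counted, so the client always gets up to `limit`
// visible messages and its scroll-to-load-more logic keeps working
function fetchMessagesBefore(
  allMessages: Message[],       // stand-in for the real message store
  blockedSenders: Set<string>,  // the requesting user's block list
  beforeTimestamp: number,
  limit = 20,
): Message[] {
  return allMessages
    .filter((m) => m.sentAt < beforeTimestamp)
    .filter((m) => !blockedSenders.has(m.senderId))
    .sort((a, b) => b.sentAt - a.sentAt) // newest first
    .slice(0, limit);
}
```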
blocking robs the attacker of motivation. they go away quickly once they're no longer able to get a rise by taunting.
sooner or later you'll get someone who generates new accounts to attack (this is already happening on other git-associated chat systems). at that point rooms will also need to be able to say things like "your account has to be at least a month old to join."
those two small things in place - a user-maintained blocklist and room aging - should be enough to stop nearly all such attacks.
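for concreteness, the room aging rule is a one-line check once you have account creation timestamps; a sketch with assumed field names:

```typescript
interface Account {
  id: string;
  createdAt: Date; // assumed field name for account creation time
}

// hypothetical room-aging gate: reject accounts younger than the room's
// configured minimum age (e.g. 30 days) to blunt throwaway-account attacks
function mayJoinRoom(account: Account, minAccountAgeDays: number): boolean {
  const ageMs = Date.now() - account.createdAt.getTime();
  return ageMs >= minAccountAgeDays * 24 * 60 * 60 * 1000;
}
```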
the blocking stuff is really not as simple as you think
it is when your goal is to deal with a long term problem user, rather than a single incident.
the first two of your edge cases are irc standards. the first one is called flooding. the second is called chanserv spamming (or msgserv spamming on very old networks like undernet).
the third and fourth aren't "edge cases," but rather results of low quality implementations.
a sensible way to prevent #4, for example, would be to load the user's total block list (which will generally be fewer than a dozen entries) and then client-side check that something doesn't come from a blocked source. if it does, don't tick the counter, and don't show it (or remove it from the data structure before further evaluation, or filter on receipt, or whatever.)
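for illustration, a sketch of that filter-on-receipt idea, assuming a small in-memory block list and an invented message shape:

```typescript
interface IncomingMessage {
  senderId: string;
  text: string;
}

const blockList = new Set<string>(); // generally fewer than a dozen entries
let unreadCount = 0;

// filter on receipt: a message from a blocked sender is dropped before it
// can tick the unread counter or reach the rendering path
function onMessageReceived(msg: IncomingMessage): void {
  if (blockList.has(msg.senderId)) return; // neither counted nor shown
  unreadCount += 1;
  renderMessage(msg);
}

function renderMessage(msg: IncomingMessage): void {
  // stand-in for the real render path
  console.log(`[unread: ${unreadCount}] ${msg.senderId}: ${msg.text}`);
}
```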
a sensible way to prevent #3 is to finish the job and implement blocking on all of your clients.
server efficiency is kind of a non-concern. mass spamming attacks against freenode, involving thousands of concurrent fake users, didn't generate a single percentage uptick in the load profile of the attacked choopa server.
it's weird that you're combining admitting that your previous overengineering blocked you for three years with what appears to be a batch of new engineering.
this is every bit as simple as it sounds.
give users a list with a fixed maximum number of slots somewhere in the hundreds, each entry of which is a single blocked other user.
allow users to add to that list.
allow users to remove from that list.
notify the client on changes to the list, in case of multijoin.
load the block list on initial join before taking any other messaging actions.
don't deliver something if it's from the badguy.
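a minimal sketch of the list just described - the slot cap, names, and in-memory store are all assumptions:

```typescript
const MAX_BLOCK_SLOTS = 200; // "somewhere in the hundreds"; assumed value

class BlockList {
  private blocked = new Set<string>();

  // allow users to add to the list, up to the fixed slot cap
  add(userId: string): boolean {
    if (this.blocked.size >= MAX_BLOCK_SLOTS) return false;
    this.blocked.add(userId);
    this.notifyClients(); // keep multijoin sessions in sync
    return true;
  }

  // allow users to remove from the list
  remove(userId: string): void {
    this.blocked.delete(userId);
    this.notifyClients();
  }

  // don't deliver something if it's from the badguy
  shouldDeliver(senderId: string): boolean {
    return !this.blocked.has(senderId);
  }

  private notifyClients(): void {
    // stand-in for pushing the updated list to every connected session
  }
}
```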
it's hard to imagine a real world infrastructure under which this would not be trivial.
then again, it's also hard to imagine an irc replacement that spent three years insisting it didn't need /ignore.
it's weird that you're combining admitting that your previous overengineering blocked you for three years with what appears to be a batch of new engineering.
No, that didn't entirely block us for three years; we chose to focus on other priorities because these were few and far between, isolated incidents. As we've grown more popular it's become a bigger concern, recently with more disgusting content, and I've held my hand up and apologised for not doing this sooner.
it is when your goal is to deal with a long term problem user, rather than a single incident.
The reporting functionality doesn't just remove a single piece of content, it deals with the long term user as well, for everyone.
As I've stated, we will do that first, as it's a bit simpler, and then we will do the per-user block. The first is happening now and will ship next week, with blocking the following week.
and then client-side check that something doesn't come from a blocked source
here is the biggest misconception. the web app is not a straight up client-server model. in IRC, you send a message to the server, it distributes it to the clients, end of story, you never deal with the message ever again. at gitter, all the messages are "stored" in the cloud and processed every time someone views a room in addition to notifying connected clients of new messages.
no, it's not rocket science to do that in a server environment, and we will, it just requires a little more consideration and it only benefits individual users whereas the reporting will effectively remove the offending content for everyone and prevent the spammer/flooder from doing it again.
blocking robs the attacker of motivation. they go away quickly once they're no longer able to get a rise by taunting.
I think there are also two separate concerns here: one is annoying users getting a rise out of people; the other, and the one we're focused on, is people posting really revolting stuff that nobody should ever have to see.
the latter can be so vile that i believe it's more important to address first so nobody else sees it. the content will go away quickly, the user won't be allowed to post at all any more and you have the same outcome that they lose motivation.
Lesser concerns include https://github.com/gitterHQ/gitter/issues/370#issuecomment-268449926 above, which I would treat as nice to have but very low priority because such messages are not offensive; they're simply noisy in spaces where I might prefer quiet. Without pointing the finger at any room: where I prefer chatting with humans, an excess of posting by bots is not welcomed by me.
@mydigitalself will it help to keep this issue concise, long term, if I spin off my far less important use case into a separate issue?
I think there are probably 2 separate things going on here; the OP seemed to want a way to just mute users.
Muting
e.g. in certain channels, there's always the same 5 users asking stupid-ass questions without helping themselves, so in the simplest case users just want to "mute" them.
So to that effect all the notification channels would just need to check that the notification isn't regarding a muted user.
I'm sure most people would be satisfied with a "collapsed" state of the message, so you see the user's name and a little badge next to it saying something like "Muted user - click to read", which would expand that single message.
I think this is the simplest approach, as it just requires a new relationship, a notification filter, and a UI filter.
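To illustrate, a sketch of that collapsed state with hypothetical markup and class names:

```typescript
// collapsed rendering for a muted user's message: show only the name and a
// "Muted user - click to read" badge; the contents appear only on request
function renderMutedMessage(username: string, text: string): HTMLElement {
  const item = document.createElement('div');
  item.className = 'chat-item chat-item--muted'; // hypothetical class names

  const badge = document.createElement('button');
  badge.textContent = `${username} · Muted user - click to read`;
  badge.addEventListener(
    'click',
    () => {
      const body = document.createElement('p');
      body.textContent = text;
      item.replaceChild(body, badge);
    },
    { once: true },
  );

  item.appendChild(badge);
  return item;
}
```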
Spam
For this, do what you must to stop malicious users; this would be a priority, keeping people "safe" over sanity. I'm sure you guys have a plan :)
TLDR
A basic muting system to be rolled out with minimal features, which can be expanded over time; start with getting the MVP of this feature out - once, of course, spam/malicious posting has been resolved.