Followup--lessons learned in a NANOG Context
Howard C. Berkowitz
hcb at clark.net
Tue Jan 7 16:27:11 UTC 1997
Let me, now that things have calmed down, try to relate some lessons
learned to the general operations environment. In a separate message, I
will also forward some traffic- and spam-related information, which
actually is relevant but has me laughing so hard I find it hard to even
write, much less talk. Poor victimized cyberpromo...their AUP was
violated...the evil spammers are out to get them...
The pace of events in the emergency did not allow for an explanation of how
the individual was located. Jon's comment below is a reasonable one, and,
with some further details of that process, I can:
1) Give at least a starting point for reasonable policies of disclosure
when a possible medical emergency exists,
2) Suggest that such situations might be reasonable things to have thought
about before an emergency, such that they can be put into a carrier's
internal operational procedures.
At 9:58 AM -0500 1/7/97, Jon Zeeff wrote:
>I'd like to point out that such things can be an invasion of privacy.
>While person A might claim that person B threatened to commit suicide,
>it is possible that person A wants to locate person B for other,
>not so good reasons.
>This will happen if all one has to say is "suicide" and everyone will
>ignore their normal privacy policies.
>> > > Thanks to everyone who responded. I was eventually able to reach one of
>> > > the providers, who was able to identify the callers through logs, and
>> > > passed the information to the local emergency people. The patient
>> > > is under treatment, and did not take a lethal dose.
>> > I'd just like to point out the similarity between this event and the use
>> > of the phone company to track down suicide callers. This reminds me of
Ehud Gavron also commented:
>Can we just change the NANOG charter to "Let's do nothing useful for
>real problems that bother providers, but if someone on IRC says they
>took an overdose, or threatens to kill themselves, let's fall all over
>ourselves revealing private info"?
I personally consider both situations -- the provider and the individual --
within scope. I would like us to consider the general case in both
situations, with an eye to reasonable provider policies, as opposed to being
stuck in specific cases.
1. Operational Details of the Case
In the specific case, the suicide message appeared primarily in a monitored
chat room, and secondarily in a private email. I did not myself see the
message in real time, but was called in shortly afterwards. Part of the
problem involved time zone differences -- both the person attempting
suicide and most of the providers were in Pacific time, and neither the
person's ISP nor the chat room had 24x7 coverage. The event was at
approximately 7:40 Eastern US time, four to five hours before the providers
involved opened their offices.
While my specific efforts focused on tracking back an email address to a
physical one, for lack of a better way to handle the situation, the actual
resolution came when the chat room operator was contacted, and given
specific text strings in the suicide message. Luckily, this operator has a
well-controlled, audited system, and was able to do a text search through
logged messages, and independently verify that the threat was issued.
In other words, the chat operator did not depend on an unverified third
party statement that a threat had been issued. The operator also records
IP addresses associated with messages, so the operator now had a verified
message from a specific address. The provider for this address was
verified with inverse lookup.
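As an illustration only (the hostnames and addresses below are hypothetical,
and this is not what the operator actually ran), the kind of cross-check
described here -- an inverse lookup on the logged IP address, compared
against the domain of the subscriber's email address -- might be sketched as:

```python
import socket

def domain_of(hostname: str, levels: int = 2) -> str:
    """Return the last `levels` DNS labels, e.g. 'example.net'."""
    return ".".join(hostname.lower().rstrip(".").split(".")[-levels:])

def same_provider(ptr_hostname: str, email: str) -> bool:
    """Check that a reverse-DNS name and an email address share a domain."""
    return domain_of(ptr_hostname) == domain_of(email.split("@", 1)[1])

def provider_for_ip(ip: str) -> str:
    """Inverse (PTR) lookup; raises socket.herror if no PTR record exists."""
    hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
    return hostname

# Hypothetical example: a logged address whose PTR record resolves to
# 'dialup42.example.net' would be consistent with a subscriber email
# address of 'user@example.net'.
```

This only verifies consistency, not identity: as noted below, an adept
hacker could defeat it, and it depends entirely on the operator's logs
being trustworthy.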
Again luckily, this was an at least partially subscription-based chat room,
and the provider had a database of names (verified by credit card) and
email addresses for subscribers. The provider revealed by inverse lookup
above matched the provider on the subscriber's email address.
Obviously, a reasonably adept hacker could have worked around many of these
verifications. Obviously, in many other cases, there would not have been
subscription information that could be verified. In many respects, this was
an optimum case.
Based on what was considered verified information, the chat room operator
contacted local police in the subscriber's area, who sent an officer to the
home. A family member found the attempted suicide at approximately the
same time, and medical treatment was initiated.
2. Potential Operational Considerations (see? NANOG tie-in)
Here's a start on an internal provider policy for handling requests to
disclose normally private information in a claimed emergency.
Content and transit providers may be contacted by individuals or
organizations seeking normally private information in the case of a
life-threatening emergency. The need here is to balance privacy against
other human values.
Basic principles of when to disclose information might include:
-- the person requesting the information must have a known and verified
   identity.
-- in claimed medical emergencies, the person requesting information should
be asked if emergency services in the location of the person endangered
have been notified. Operations staff should request information by which
this notification can be verified.
-- in the case of content providers that might be able to retrieve the
actual message traffic of concern, the caller should be asked for
specific identifying information. This might extend to access providers
that could identify that a call was made to a given dialup server port
at a specific time, but obviously is impractical for transit providers.
Comments and questions welcome. Obviously, local legal considerations will
apply. I don't have a copy of a telco trace authorization procedure, but
one could be a good guideline.