In that case, following Alice's input is still the best strategy, but you'll be worse off: you'd only be right if both tell the truth, at 80% × 80% = 64%, or both lie, at 20% × 20% = 4%, for a total of 68%.
In the general case of n intermediate occasional liars, the probability that the final report is accurate approaches 50% as n grows large, which makes sense: the report loses all correlation with the original flip.
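As a quick sketch of that limit (the helper name here is mine, not from the thread): if each relay independently lies with probability q = 0.2, the report is correct exactly when an even number of lies occurred, which works out to (1 + (1 − 2q)^n) / 2. That gives 0.8 for n = 1, 0.68 for n = 2, and tends to 0.5 as n grows.

def p_correct(n: int, lie_prob: float = 0.2) -> float:
    """Probability the nth-hand report matches the coin when each
    relay independently lies with probability lie_prob."""
    return (1 + (1 - 2 * lie_prob) ** n) / 2

for n in (1, 2, 5, 10, 50):
    print(n, p_correct(n))
# prints 0.8, 0.68, 0.53888, then values approaching 0.5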
Thanks. I came up with this Python simulation that matches your 68%:
import random


def lying_flippers(num_flips=1_000_000):
    """
    - Bob flips a coin and tells Alice the result but lies 20% of the
      time.
    - Alice tells me Bob's result but also lies 20% of the time.
    - If I trust Bob, I know I'll be correct 80% of the time.
    - If I trust Alice, how often will I be correct (assuming I don't
      know Bob's result)?
    """

    # Invert the reported flip 20% of the time.
    def maybe_flip_flip(flip: bool) -> bool:
        if random.random() < 0.2:
            return not flip
        return flip

    # Count how many reported flips still match the actual flips.
    def sum_correct(actual, altered):
        return sum(1 for a, b in zip(actual, altered) if a == b)

    actual_flips = [random.choice((True, False)) for _ in range(num_flips)]
    num_heads = sum(actual_flips)
    num_tails = num_flips - num_heads
    print(f"Heads = {num_heads} Tails = {num_tails}")

    bob_flips = [maybe_flip_flip(flip) for flip in actual_flips]
    alice_flips = [maybe_flip_flip(flip) for flip in bob_flips]

    bob_num_correct = sum_correct(actual_flips, bob_flips)
    bob_percent_correct = bob_num_correct / num_flips
    alice_num_correct = sum_correct(actual_flips, alice_flips)
    alice_percent_correct = alice_num_correct / num_flips

    # Trusting Bob should lead to being correct ~80% of the time.
    # This is just a verification of the model since we already know the answer.
    print(f"Trust Bob -> {bob_percent_correct:.1%}")

    # Trusting Alice should lead to being correct ?% of the time.
    # This model produces ~68%.
    print(f"Trust Alice -> {alice_percent_correct:.1%}")
    print()


lying_flippers()
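To also check the n-liar claim empirically, here's a minimal extension of the same idea (the function name and parameters are mine) that chains the 20% lie through n relays; it drifts toward 50% as n grows, matching the closed-form value.

def chain_of_liars(n, num_flips=200_000, lie_prob=0.2):
    """Fraction of flips reported correctly after passing through
    n relays who each lie with probability lie_prob."""
    correct = 0
    for _ in range(num_flips):
        actual = random.choice((True, False))
        report = actual
        for _ in range(n):
            if random.random() < lie_prob:
                report = not report
        correct += report == actual
    return correct / num_flips

for n in (1, 2, 5, 10):
    print(f"{n} relays -> {chain_of_liars(n):.1%}")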